Test Report: Docker_Windows 18634

743ee2f6c19b1c9aeee0e19f36a4d6af542f1699:2024-04-15:34041

Failed tests (5/345)

| Order | Failed test                                             | Duration (s) |
|-------|---------------------------------------------------------|--------------|
| 64    | TestErrorSpam/setup                                     | 65.1         |
| 88    | TestFunctional/serial/MinikubeKubectlCmdDirectly        | 6.24         |
| 95    | TestFunctional/parallel/ConfigCmd                       | 1.57         |
| 303   | TestPause/serial/PauseAgain                             | 45.97        |
| 383   | TestStartStop/group/old-k8s-version/serial/SecondStart  | 422.02       |
TestErrorSpam/setup (65.1s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-452000 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-452000 --driver=docker
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-452000 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-452000 --driver=docker: (1m5.0991512s)
error_spam_test.go:96: unexpected stderr: "W0415 17:50:21.689874    7256 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube4\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."
error_spam_test.go:110: minikube stdout:
* [nospam-452000] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
- KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
- MINIKUBE_LOCATION=18634
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the docker driver based on user configuration
* Using Docker Desktop driver with root privileges
* Starting "nospam-452000" primary control-plane node in "nospam-452000" cluster
* Pulling base image v0.0.43-1713176859-18634 ...
* Creating docker container (CPUs=2, Memory=2250MB) ...
* Preparing Kubernetes v1.29.3 on Docker 26.0.1 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-452000" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
W0415 17:50:21.689874    7256 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
--- FAIL: TestErrorSpam/setup (65.10s)
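
Note: the unexpected stderr above is the Docker CLI failing to resolve its "default" context. The CLI's context store names each context directory after the SHA-256 digest of the context name, which is why the missing meta.json path ends in 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f. A minimal Go sketch of that derivation, included only as an illustration (this reflects observed Docker CLI context-store behavior, not minikube code):

    // Sketch: reproduce the directory name the Docker CLI context store
    // uses for the context called "default".
    package main

    import (
        "crypto/sha256"
        "fmt"
    )

    func main() {
        sum := sha256.Sum256([]byte("default"))
        // Expected to print
        // 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f,
        // matching the path component in the stderr above.
        fmt.Printf("%x\n", sum)
    }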

TestFunctional/serial/MinikubeKubectlCmdDirectly (6.24s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:731: link out/minikube-windows-amd64.exe out\kubectl.exe: Cannot create a file when that file already exists.
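
Note: this failure is a leftover-artifact collision. The test links the minikube binary to out\kubectl.exe, and on Windows the link call fails because the target already exists from an earlier run. A hypothetical sketch of an idempotent variant follows; the helper name and the use of os.Link are assumptions for illustration, not the test's actual code:

    // Hypothetical helper (not minikube's actual code): remove any stale
    // target first so a leftover kubectl.exe from an earlier run does not
    // trigger "Cannot create a file when that file already exists".
    package main

    import (
        "fmt"
        "os"
    )

    func linkOverwrite(src, dst string) error {
        _ = os.Remove(dst) // ignore "not exist"; other errors resurface in os.Link
        return os.Link(src, dst)
    }

    func main() {
        if err := linkOverwrite("out/minikube-windows-amd64.exe", `out\kubectl.exe`); err != nil {
            fmt.Println("link failed:", err)
        }
    }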
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-662500
helpers_test.go:235: (dbg) docker inspect functional-662500:

-- stdout --
	[
	    {
	        "Id": "e3ec86d968fe6c8a94413d3e09fa5b8330af4213f3eeaa487a0b3c9195be2fc7",
	        "Created": "2024-04-15T17:52:36.53576702Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 22336,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-04-15T17:52:37.067090272Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:06fc94f477def8d6ec1f9decaa8d9de4b332d5597cd1759a7075056e46e00dfc",
	        "ResolvConfPath": "/var/lib/docker/containers/e3ec86d968fe6c8a94413d3e09fa5b8330af4213f3eeaa487a0b3c9195be2fc7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e3ec86d968fe6c8a94413d3e09fa5b8330af4213f3eeaa487a0b3c9195be2fc7/hostname",
	        "HostsPath": "/var/lib/docker/containers/e3ec86d968fe6c8a94413d3e09fa5b8330af4213f3eeaa487a0b3c9195be2fc7/hosts",
	        "LogPath": "/var/lib/docker/containers/e3ec86d968fe6c8a94413d3e09fa5b8330af4213f3eeaa487a0b3c9195be2fc7/e3ec86d968fe6c8a94413d3e09fa5b8330af4213f3eeaa487a0b3c9195be2fc7-json.log",
	        "Name": "/functional-662500",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-662500:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-662500",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4194304000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1d93103f0392b6c8c59e92f601004580160238ed7598c44affbda192da297731-init/diff:/var/lib/docker/overlay2/7d5cfefbd46c2f94744068cb810a43a2057da1935809c9054bd8d457b0f559e7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1d93103f0392b6c8c59e92f601004580160238ed7598c44affbda192da297731/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1d93103f0392b6c8c59e92f601004580160238ed7598c44affbda192da297731/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1d93103f0392b6c8c59e92f601004580160238ed7598c44affbda192da297731/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-662500",
	                "Source": "/var/lib/docker/volumes/functional-662500/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-662500",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-662500",
	                "name.minikube.sigs.k8s.io": "functional-662500",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4ebf92b3791302ffacbd5d4429dbe1fa78fd727edfe54e8e05755f66991bb10c",
	            "SandboxKey": "/var/run/docker/netns/4ebf92b37913",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51311"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51312"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51313"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51314"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51310"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-662500": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "b60a81d0e7dfd464215556f27690e6a367fe4433503e865f952e7387dfab08ad",
	                    "EndpointID": "871ef8cfdf7ced04c1aadd671244a674838bffd32c833eb4359988d30d91508e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "functional-662500",
	                        "e3ec86d968fe"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-662500 -n functional-662500
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-662500 -n functional-662500: (1.3041322s)
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-662500 logs -n 25: (2.5536289s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------|-------------------|-------------------|----------------|---------------------|---------------------|
	| Command |                            Args                             |      Profile      |       User        |    Version     |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------|-------------------|-------------------|----------------|---------------------|---------------------|
	| pause   | nospam-452000 --log_dir                                     | nospam-452000     | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:51 UTC | 15 Apr 24 17:51 UTC |
	|         | C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-452000 |                   |                   |                |                     |                     |
	|         | pause                                                       |                   |                   |                |                     |                     |
	| unpause | nospam-452000 --log_dir                                     | nospam-452000     | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:51 UTC | 15 Apr 24 17:51 UTC |
	|         | C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-452000 |                   |                   |                |                     |                     |
	|         | unpause                                                     |                   |                   |                |                     |                     |
	| unpause | nospam-452000 --log_dir                                     | nospam-452000     | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:51 UTC | 15 Apr 24 17:51 UTC |
	|         | C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-452000 |                   |                   |                |                     |                     |
	|         | unpause                                                     |                   |                   |                |                     |                     |
	| unpause | nospam-452000 --log_dir                                     | nospam-452000     | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:51 UTC | 15 Apr 24 17:51 UTC |
	|         | C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-452000 |                   |                   |                |                     |                     |
	|         | unpause                                                     |                   |                   |                |                     |                     |
	| stop    | nospam-452000 --log_dir                                     | nospam-452000     | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:51 UTC | 15 Apr 24 17:51 UTC |
	|         | C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-452000 |                   |                   |                |                     |                     |
	|         | stop                                                        |                   |                   |                |                     |                     |
	| stop    | nospam-452000 --log_dir                                     | nospam-452000     | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:51 UTC | 15 Apr 24 17:51 UTC |
	|         | C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-452000 |                   |                   |                |                     |                     |
	|         | stop                                                        |                   |                   |                |                     |                     |
	| stop    | nospam-452000 --log_dir                                     | nospam-452000     | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:51 UTC | 15 Apr 24 17:52 UTC |
	|         | C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-452000 |                   |                   |                |                     |                     |
	|         | stop                                                        |                   |                   |                |                     |                     |
	| delete  | -p nospam-452000                                            | nospam-452000     | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:52 UTC | 15 Apr 24 17:52 UTC |
	| start   | -p functional-662500                                        | functional-662500 | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:52 UTC | 15 Apr 24 17:53 UTC |
	|         | --memory=4000                                               |                   |                   |                |                     |                     |
	|         | --apiserver-port=8441                                       |                   |                   |                |                     |                     |
	|         | --wait=all --driver=docker                                  |                   |                   |                |                     |                     |
	| start   | -p functional-662500                                        | functional-662500 | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:53 UTC | 15 Apr 24 17:54 UTC |
	|         | --alsologtostderr -v=8                                      |                   |                   |                |                     |                     |
	| cache   | functional-662500 cache add                                 | functional-662500 | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:54 UTC | 15 Apr 24 17:54 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |                |                     |                     |
	| cache   | functional-662500 cache add                                 | functional-662500 | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:54 UTC | 15 Apr 24 17:54 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |                |                     |                     |
	| cache   | functional-662500 cache add                                 | functional-662500 | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:54 UTC | 15 Apr 24 17:54 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |                |                     |                     |
	| cache   | functional-662500 cache add                                 | functional-662500 | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:54 UTC | 15 Apr 24 17:54 UTC |
	|         | minikube-local-cache-test:functional-662500                 |                   |                   |                |                     |                     |
	| cache   | functional-662500 cache delete                              | functional-662500 | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:54 UTC | 15 Apr 24 17:54 UTC |
	|         | minikube-local-cache-test:functional-662500                 |                   |                   |                |                     |                     |
	| cache   | delete                                                      | minikube          | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:54 UTC | 15 Apr 24 17:54 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |                |                     |                     |
	| cache   | list                                                        | minikube          | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:54 UTC | 15 Apr 24 17:54 UTC |
	| ssh     | functional-662500 ssh sudo                                  | functional-662500 | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:54 UTC | 15 Apr 24 17:54 UTC |
	|         | crictl images                                               |                   |                   |                |                     |                     |
	| ssh     | functional-662500                                           | functional-662500 | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:54 UTC | 15 Apr 24 17:54 UTC |
	|         | ssh sudo docker rmi                                         |                   |                   |                |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |                |                     |                     |
	| ssh     | functional-662500 ssh                                       | functional-662500 | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:54 UTC |                     |
	|         | sudo crictl inspecti                                        |                   |                   |                |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |                |                     |                     |
	| cache   | functional-662500 cache reload                              | functional-662500 | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:54 UTC | 15 Apr 24 17:54 UTC |
	| ssh     | functional-662500 ssh                                       | functional-662500 | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:54 UTC | 15 Apr 24 17:54 UTC |
	|         | sudo crictl inspecti                                        |                   |                   |                |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |                |                     |                     |
	| cache   | delete                                                      | minikube          | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:54 UTC | 15 Apr 24 17:54 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |                |                     |                     |
	| cache   | delete                                                      | minikube          | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:54 UTC | 15 Apr 24 17:54 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |                |                     |                     |
	| kubectl | functional-662500 kubectl --                                | functional-662500 | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:54 UTC | 15 Apr 24 17:54 UTC |
	|         | --context functional-662500                                 |                   |                   |                |                     |                     |
	|         | get pods                                                    |                   |                   |                |                     |                     |
	|---------|-------------------------------------------------------------|-------------------|-------------------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/15 17:53:27
	Running on machine: minikube4
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0415 17:53:27.709880    1700 out.go:291] Setting OutFile to fd 736 ...
	I0415 17:53:27.710866    1700 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 17:53:27.710866    1700 out.go:304] Setting ErrFile to fd 700...
	I0415 17:53:27.710866    1700 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 17:53:27.732127    1700 out.go:298] Setting JSON to false
	I0415 17:53:27.734837    1700 start.go:129] hostinfo: {"hostname":"minikube4","uptime":20077,"bootTime":1713183529,"procs":205,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W0415 17:53:27.736205    1700 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 17:53:27.740299    1700 out.go:177] * [functional-662500] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0415 17:53:27.742810    1700 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0415 17:53:27.742263    1700 notify.go:220] Checking for updates...
	I0415 17:53:27.745412    1700 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 17:53:27.747455    1700 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I0415 17:53:27.749523    1700 out.go:177]   - MINIKUBE_LOCATION=18634
	I0415 17:53:27.751508    1700 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 17:53:27.755135    1700 config.go:182] Loaded profile config "functional-662500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 17:53:27.755135    1700 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 17:53:28.042450    1700 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0415 17:53:28.052636    1700 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 17:53:28.386993    1700 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:73 OomKillDisable:true NGoroutines:86 SystemTime:2024-04-15 17:53:28.345328057 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion
:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:D
ocker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 17:53:28.392065    1700 out.go:177] * Using the docker driver based on existing profile
	I0415 17:53:28.394020    1700 start.go:297] selected driver: docker
	I0415 17:53:28.394059    1700 start.go:901] validating driver "docker" against &{Name:functional-662500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-662500 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p M
ountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 17:53:28.394163    1700 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 17:53:28.416284    1700 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 17:53:28.746412    1700 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:73 OomKillDisable:true NGoroutines:86 SystemTime:2024-04-15 17:53:28.707773153 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion
:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:D
ocker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 17:53:28.851167    1700 cni.go:84] Creating CNI manager for ""
	I0415 17:53:28.851167    1700 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 17:53:28.851167    1700 start.go:340] cluster config:
	{Name:functional-662500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-662500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Cus
tomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 17:53:28.855666    1700 out.go:177] * Starting "functional-662500" primary control-plane node in "functional-662500" cluster
	I0415 17:53:28.858794    1700 cache.go:121] Beginning downloading kic base image for docker with docker
	I0415 17:53:28.862532    1700 out.go:177] * Pulling base image v0.0.43-1713176859-18634 ...
	I0415 17:53:28.864983    1700 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 17:53:28.865040    1700 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b in local docker daemon
	I0415 17:53:28.865193    1700 preload.go:147] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0415 17:53:28.865193    1700 cache.go:56] Caching tarball of preloaded images
	I0415 17:53:28.865300    1700 preload.go:173] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0415 17:53:28.865300    1700 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 17:53:28.865984    1700 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-662500\config.json ...
	I0415 17:53:29.040577    1700 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b in local docker daemon, skipping pull
	I0415 17:53:29.040638    1700 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b exists in daemon, skipping load
	I0415 17:53:29.040714    1700 cache.go:194] Successfully downloaded all kic artifacts
	I0415 17:53:29.040714    1700 start.go:360] acquireMachinesLock for functional-662500: {Name:mk8888991f48fe93f7be62e11dfecdcc22d86d82 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 17:53:29.040714    1700 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-662500"
	I0415 17:53:29.041305    1700 start.go:96] Skipping create...Using existing machine configuration
	I0415 17:53:29.041305    1700 fix.go:54] fixHost starting: 
	I0415 17:53:29.059564    1700 cli_runner.go:164] Run: docker container inspect functional-662500 --format={{.State.Status}}
	I0415 17:53:29.221596    1700 fix.go:112] recreateIfNeeded on functional-662500: state=Running err=<nil>
	W0415 17:53:29.221596    1700 fix.go:138] unexpected machine state, will restart: <nil>
	I0415 17:53:29.229645    1700 out.go:177] * Updating the running docker "functional-662500" container ...
	I0415 17:53:29.232282    1700 machine.go:94] provisionDockerMachine start ...
	I0415 17:53:29.240737    1700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-662500
	I0415 17:53:29.403873    1700 main.go:141] libmachine: Using SSH client type: native
	I0415 17:53:29.404421    1700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeda1c0] 0xedcda0 <nil>  [] 0s} 127.0.0.1 51311 <nil> <nil>}
	I0415 17:53:29.404688    1700 main.go:141] libmachine: About to run SSH command:
	hostname
	I0415 17:53:29.583201    1700 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-662500
	
	I0415 17:53:29.583734    1700 ubuntu.go:169] provisioning hostname "functional-662500"
	I0415 17:53:29.597955    1700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-662500
	I0415 17:53:29.758983    1700 main.go:141] libmachine: Using SSH client type: native
	I0415 17:53:29.759469    1700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeda1c0] 0xedcda0 <nil>  [] 0s} 127.0.0.1 51311 <nil> <nil>}
	I0415 17:53:29.759469    1700 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-662500 && echo "functional-662500" | sudo tee /etc/hostname
	I0415 17:53:29.951098    1700 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-662500
	
	I0415 17:53:29.962608    1700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-662500
	I0415 17:53:30.131208    1700 main.go:141] libmachine: Using SSH client type: native
	I0415 17:53:30.131730    1700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeda1c0] 0xedcda0 <nil>  [] 0s} 127.0.0.1 51311 <nil> <nil>}
	I0415 17:53:30.131730    1700 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-662500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-662500/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-662500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0415 17:53:30.301535    1700 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0415 17:53:30.301707    1700 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I0415 17:53:30.301707    1700 ubuntu.go:177] setting up certificates
	I0415 17:53:30.301707    1700 provision.go:84] configureAuth start
	I0415 17:53:30.312058    1700 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-662500
	I0415 17:53:30.465840    1700 provision.go:143] copyHostCerts
	I0415 17:53:30.465870    1700 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem
	I0415 17:53:30.465870    1700 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I0415 17:53:30.465870    1700 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I0415 17:53:30.466745    1700 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0415 17:53:30.467679    1700 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem
	I0415 17:53:30.467803    1700 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I0415 17:53:30.467803    1700 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I0415 17:53:30.467803    1700 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0415 17:53:30.469199    1700 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem
	I0415 17:53:30.469376    1700 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I0415 17:53:30.469482    1700 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I0415 17:53:30.469874    1700 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I0415 17:53:30.470514    1700 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-662500 san=[127.0.0.1 192.168.49.2 functional-662500 localhost minikube]
	I0415 17:53:30.749303    1700 provision.go:177] copyRemoteCerts
	I0415 17:53:30.761805    1700 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0415 17:53:30.785383    1700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-662500
	I0415 17:53:30.968007    1700 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51311 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-662500\id_rsa Username:docker}
	I0415 17:53:31.089306    1700 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0415 17:53:31.089306    1700 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0415 17:53:31.132341    1700 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0415 17:53:31.132341    1700 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0415 17:53:31.176504    1700 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0415 17:53:31.177097    1700 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0415 17:53:31.217829    1700 provision.go:87] duration metric: took 916.0783ms to configureAuth
	I0415 17:53:31.217829    1700 ubuntu.go:193] setting minikube options for container-runtime
	I0415 17:53:31.218597    1700 config.go:182] Loaded profile config "functional-662500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 17:53:31.227352    1700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-662500
	I0415 17:53:31.393221    1700 main.go:141] libmachine: Using SSH client type: native
	I0415 17:53:31.393710    1700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeda1c0] 0xedcda0 <nil>  [] 0s} 127.0.0.1 51311 <nil> <nil>}
	I0415 17:53:31.393760    1700 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0415 17:53:31.562915    1700 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0415 17:53:31.562915    1700 ubuntu.go:71] root file system type: overlay
	I0415 17:53:31.562915    1700 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0415 17:53:31.572837    1700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-662500
	I0415 17:53:31.736931    1700 main.go:141] libmachine: Using SSH client type: native
	I0415 17:53:31.736931    1700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeda1c0] 0xedcda0 <nil>  [] 0s} 127.0.0.1 51311 <nil> <nil>}
	I0415 17:53:31.736931    1700 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0415 17:53:31.927992    1700 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0415 17:53:31.940037    1700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-662500
	I0415 17:53:32.108163    1700 main.go:141] libmachine: Using SSH client type: native
	I0415 17:53:32.108974    1700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeda1c0] 0xedcda0 <nil>  [] 0s} 127.0.0.1 51311 <nil> <nil>}
	I0415 17:53:32.108974    1700 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0415 17:53:32.289091    1700 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0415 17:53:32.289091    1700 machine.go:97] duration metric: took 3.0566303s to provisionDockerMachine
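The `diff -u ... || { mv ...; systemctl ... }` step above only swaps in the regenerated docker.service and restarts the daemon when the rendered unit actually differs from what is already on the node, so an already-provisioned machine is left untouched. A minimal way to confirm the outcome by hand (a sketch, assuming the same profile name and the stock minikube CLI from this run):

    # Show the unit systemd actually loaded and the cgroup driver dockerd reports.
    minikube ssh -p functional-662500 "sudo systemctl cat docker.service"
    minikube ssh -p functional-662500 "docker info --format '{{.CgroupDriver}}'"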
	I0415 17:53:32.289091    1700 start.go:293] postStartSetup for "functional-662500" (driver="docker")
	I0415 17:53:32.289091    1700 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0415 17:53:32.304023    1700 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0415 17:53:32.313395    1700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-662500
	I0415 17:53:32.479329    1700 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51311 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-662500\id_rsa Username:docker}
	I0415 17:53:32.642048    1700 ssh_runner.go:195] Run: cat /etc/os-release
	I0415 17:53:32.652928    1700 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.4 LTS"
	I0415 17:53:32.652985    1700 command_runner.go:130] > NAME="Ubuntu"
	I0415 17:53:32.652985    1700 command_runner.go:130] > VERSION_ID="22.04"
	I0415 17:53:32.652985    1700 command_runner.go:130] > VERSION="22.04.4 LTS (Jammy Jellyfish)"
	I0415 17:53:32.652985    1700 command_runner.go:130] > VERSION_CODENAME=jammy
	I0415 17:53:32.652985    1700 command_runner.go:130] > ID=ubuntu
	I0415 17:53:32.652985    1700 command_runner.go:130] > ID_LIKE=debian
	I0415 17:53:32.652985    1700 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0415 17:53:32.652985    1700 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0415 17:53:32.652985    1700 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0415 17:53:32.652985    1700 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0415 17:53:32.652985    1700 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0415 17:53:32.653527    1700 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0415 17:53:32.653581    1700 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0415 17:53:32.653649    1700 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0415 17:53:32.653649    1700 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0415 17:53:32.653699    1700 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I0415 17:53:32.653908    1700 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I0415 17:53:32.654539    1700 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117482.pem -> 117482.pem in /etc/ssl/certs
	I0415 17:53:32.654539    1700 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117482.pem -> /etc/ssl/certs/117482.pem
	I0415 17:53:32.655691    1700 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11748\hosts -> hosts in /etc/test/nested/copy/11748
	I0415 17:53:32.655785    1700 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11748\hosts -> /etc/test/nested/copy/11748/hosts
	I0415 17:53:32.667573    1700 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/11748
	I0415 17:53:32.686519    1700 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117482.pem --> /etc/ssl/certs/117482.pem (1708 bytes)
	I0415 17:53:32.733608    1700 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11748\hosts --> /etc/test/nested/copy/11748/hosts (40 bytes)
	I0415 17:53:32.773197    1700 start.go:296] duration metric: took 484.0831ms for postStartSetup
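postStartSetup mirrors anything under the host-side .minikube\files tree into the node at the matching absolute path, which is how the test certificate and nested hosts file above end up at /etc/ssl/certs/117482.pem and /etc/test/nested/copy/11748/hosts. A quick spot check from the host (a sketch, profile name as in this run):

    # Confirm the synced assets landed at their mirrored guest paths.
    minikube ssh -p functional-662500 "ls -l /etc/ssl/certs/117482.pem /etc/test/nested/copy/11748/hosts"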
	I0415 17:53:32.787094    1700 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 17:53:32.794679    1700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-662500
	I0415 17:53:32.967848    1700 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51311 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-662500\id_rsa Username:docker}
	I0415 17:53:33.084191    1700 command_runner.go:130] > 1%
	I0415 17:53:33.096582    1700 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 17:53:33.109307    1700 command_runner.go:130] > 952G
	I0415 17:53:33.109307    1700 fix.go:56] duration metric: took 4.0678115s for fixHost
	I0415 17:53:33.109307    1700 start.go:83] releasing machines lock for "functional-662500", held for 4.0684018s
	I0415 17:53:33.119176    1700 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-662500
	I0415 17:53:33.280697    1700 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0415 17:53:33.292116    1700 ssh_runner.go:195] Run: cat /version.json
	I0415 17:53:33.294918    1700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-662500
	I0415 17:53:33.299878    1700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-662500
	I0415 17:53:33.460380    1700 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51311 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-662500\id_rsa Username:docker}
	I0415 17:53:33.475560    1700 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51311 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-662500\id_rsa Username:docker}
	I0415 17:53:33.580617    1700 command_runner.go:130] > {"iso_version": "v1.33.0-1712854267-18621", "kicbase_version": "v0.0.43-1713176859-18634", "minikube_version": "v1.33.0-beta.0", "commit": "0ece0b4c602cbaab0821f0ba2d6ec4a07a392655"}
	I0415 17:53:33.592443    1700 ssh_runner.go:195] Run: systemctl --version
	I0415 17:53:33.733279    1700 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0415 17:53:33.733279    1700 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.12)
	I0415 17:53:33.733279    1700 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0415 17:53:33.745219    1700 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0415 17:53:33.756188    1700 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0415 17:53:33.756188    1700 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0415 17:53:33.756188    1700 command_runner.go:130] > Device: 91h/145d	Inode: 214         Links: 1
	I0415 17:53:33.756188    1700 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0415 17:53:33.756188    1700 command_runner.go:130] > Access: 2024-04-15 17:40:37.653720435 +0000
	I0415 17:53:33.756188    1700 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0415 17:53:33.756188    1700 command_runner.go:130] > Change: 2024-04-15 17:40:05.206023257 +0000
	I0415 17:53:33.756188    1700 command_runner.go:130] >  Birth: 2024-04-15 17:40:05.206023257 +0000
	I0415 17:53:33.768284    1700 ssh_runner.go:195] Run: sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0415 17:53:33.786463    1700 command_runner.go:130] ! find: '\\etc\\cni\\net.d': No such file or directory
	W0415 17:53:33.788134    1700 start.go:438] unable to name loopback interface in configureRuntimes: unable to patch loopback cni config "/etc/cni/net.d/*loopback.conf*": sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;: Process exited with status 1
	stdout:
	
	stderr:
	find: '\\etc\\cni\\net.d': No such file or directory
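The warning above is the loopback-CNI patch failing because the remote command was assembled with Windows path separators; the node's shell needs the POSIX form, which the subsequent find against /etc/cni/net.d does use. The equivalent probe with forward slashes (a sketch, same paths as the log):

    # List the loopback CNI configs the patch step was trying to edit.
    minikube ssh -p functional-662500 "sudo find /etc/cni/net.d -maxdepth 1 -type f -name '*loopback.conf*' -not -name '*.mk_disabled'"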
	I0415 17:53:33.800544    1700 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0415 17:53:33.821777    1700 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0415 17:53:33.821812    1700 start.go:494] detecting cgroup driver to use...
	I0415 17:53:33.821874    1700 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0415 17:53:33.821874    1700 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 17:53:33.853389    1700 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0415 17:53:33.869684    1700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0415 17:53:33.902977    1700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0415 17:53:33.924130    1700 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0415 17:53:33.939212    1700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0415 17:53:33.971156    1700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 17:53:34.006581    1700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0415 17:53:34.036827    1700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 17:53:34.073584    1700 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0415 17:53:34.109126    1700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0415 17:53:34.142611    1700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0415 17:53:34.180851    1700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0415 17:53:34.215736    1700 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0415 17:53:34.234335    1700 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0415 17:53:34.249811    1700 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0415 17:53:34.280549    1700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 17:53:34.457692    1700 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0415 17:53:44.967734    1700 ssh_runner.go:235] Completed: sudo systemctl restart containerd: (10.5094933s)
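The sed edits above pin containerd to the cgroupfs driver (SystemdCgroup = false), the runc v2 shim, and /etc/cni/net.d as its CNI conf dir before the ~10s restart. To verify the setting that was left in place (a sketch, config path as in the log):

    # containerd should report SystemdCgroup = false after the edits above.
    minikube ssh -p functional-662500 "grep -n 'SystemdCgroup' /etc/containerd/config.toml"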
	I0415 17:53:44.967734    1700 start.go:494] detecting cgroup driver to use...
	I0415 17:53:44.967734    1700 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0415 17:53:44.980722    1700 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0415 17:53:45.010859    1700 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0415 17:53:45.010859    1700 command_runner.go:130] > [Unit]
	I0415 17:53:45.010859    1700 command_runner.go:130] > Description=Docker Application Container Engine
	I0415 17:53:45.010859    1700 command_runner.go:130] > Documentation=https://docs.docker.com
	I0415 17:53:45.010859    1700 command_runner.go:130] > BindsTo=containerd.service
	I0415 17:53:45.010973    1700 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0415 17:53:45.010973    1700 command_runner.go:130] > Wants=network-online.target
	I0415 17:53:45.010973    1700 command_runner.go:130] > Requires=docker.socket
	I0415 17:53:45.011023    1700 command_runner.go:130] > StartLimitBurst=3
	I0415 17:53:45.011023    1700 command_runner.go:130] > StartLimitIntervalSec=60
	I0415 17:53:45.011061    1700 command_runner.go:130] > [Service]
	I0415 17:53:45.011061    1700 command_runner.go:130] > Type=notify
	I0415 17:53:45.011061    1700 command_runner.go:130] > Restart=on-failure
	I0415 17:53:45.011061    1700 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0415 17:53:45.011061    1700 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0415 17:53:45.011142    1700 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0415 17:53:45.011142    1700 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0415 17:53:45.011172    1700 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0415 17:53:45.011172    1700 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0415 17:53:45.011172    1700 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0415 17:53:45.011227    1700 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0415 17:53:45.011227    1700 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0415 17:53:45.011264    1700 command_runner.go:130] > ExecStart=
	I0415 17:53:45.011264    1700 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0415 17:53:45.011311    1700 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0415 17:53:45.011311    1700 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0415 17:53:45.011339    1700 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0415 17:53:45.011339    1700 command_runner.go:130] > LimitNOFILE=infinity
	I0415 17:53:45.011339    1700 command_runner.go:130] > LimitNPROC=infinity
	I0415 17:53:45.011339    1700 command_runner.go:130] > LimitCORE=infinity
	I0415 17:53:45.011339    1700 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0415 17:53:45.011339    1700 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0415 17:53:45.011339    1700 command_runner.go:130] > TasksMax=infinity
	I0415 17:53:45.011339    1700 command_runner.go:130] > TimeoutStartSec=0
	I0415 17:53:45.011339    1700 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0415 17:53:45.011339    1700 command_runner.go:130] > Delegate=yes
	I0415 17:53:45.011339    1700 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0415 17:53:45.011339    1700 command_runner.go:130] > KillMode=process
	I0415 17:53:45.011339    1700 command_runner.go:130] > [Install]
	I0415 17:53:45.011339    1700 command_runner.go:130] > WantedBy=multi-user.target
	I0415 17:53:45.011339    1700 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0415 17:53:45.023421    1700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0415 17:53:45.053828    1700 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 17:53:45.105299    1700 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0415 17:53:45.123776    1700 ssh_runner.go:195] Run: which cri-dockerd
	I0415 17:53:45.135973    1700 command_runner.go:130] > /usr/bin/cri-dockerd
	I0415 17:53:45.154438    1700 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0415 17:53:45.175065    1700 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0415 17:53:45.226424    1700 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0415 17:53:45.447053    1700 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0415 17:53:45.604543    1700 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0415 17:53:45.604774    1700 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
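The 130-byte payload written to /etc/docker/daemon.json is not printed in the log; it is the piece that tells dockerd itself to use the cgroupfs driver. A hypothetical illustration of such a file (the exact contents written in this run are an assumption, not shown above):

    # Illustrative only: pin dockerd to the cgroupfs cgroup driver via daemon.json.
    cat <<'EOF' | sudo tee /etc/docker/daemon.json
    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"]
    }
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart docker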
	I0415 17:53:45.650237    1700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 17:53:45.815301    1700 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0415 17:53:46.506396    1700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0415 17:53:46.539616    1700 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0415 17:53:46.580322    1700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0415 17:53:46.613837    1700 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0415 17:53:46.762958    1700 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0415 17:53:46.909375    1700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 17:53:47.049423    1700 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0415 17:53:47.087977    1700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0415 17:53:47.128400    1700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 17:53:47.318035    1700 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0415 17:53:47.474825    1700 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0415 17:53:47.487814    1700 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0415 17:53:47.498423    1700 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0415 17:53:47.498423    1700 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0415 17:53:47.498423    1700 command_runner.go:130] > Device: 9ah/154d	Inode: 721         Links: 1
	I0415 17:53:47.498423    1700 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0415 17:53:47.498423    1700 command_runner.go:130] > Access: 2024-04-15 17:53:47.341686000 +0000
	I0415 17:53:47.498423    1700 command_runner.go:130] > Modify: 2024-04-15 17:53:47.341686000 +0000
	I0415 17:53:47.498423    1700 command_runner.go:130] > Change: 2024-04-15 17:53:47.341686000 +0000
	I0415 17:53:47.499378    1700 command_runner.go:130] >  Birth: -
	I0415 17:53:47.499378    1700 start.go:562] Will wait 60s for crictl version
	I0415 17:53:47.513405    1700 ssh_runner.go:195] Run: which crictl
	I0415 17:53:47.524181    1700 command_runner.go:130] > /usr/bin/crictl
	I0415 17:53:47.537962    1700 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0415 17:53:47.607168    1700 command_runner.go:130] > Version:  0.1.0
	I0415 17:53:47.607168    1700 command_runner.go:130] > RuntimeName:  docker
	I0415 17:53:47.607168    1700 command_runner.go:130] > RuntimeVersion:  26.0.1
	I0415 17:53:47.607168    1700 command_runner.go:130] > RuntimeApiVersion:  v1
	I0415 17:53:47.607168    1700 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.1
	RuntimeApiVersion:  v1
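With /etc/crictl.yaml now pointing at unix:///var/run/cri-dockerd.sock, any crictl invocation on the node talks to Docker through cri-dockerd, which is why the probe above reports RuntimeName docker / RuntimeVersion 26.0.1. The same check by hand (a sketch):

    # crictl reads its runtime endpoint from the /etc/crictl.yaml written earlier.
    minikube ssh -p functional-662500 "sudo crictl version"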
	I0415 17:53:47.616853    1700 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0415 17:53:47.669232    1700 command_runner.go:130] > 26.0.1
	I0415 17:53:47.679480    1700 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0415 17:53:47.732693    1700 command_runner.go:130] > 26.0.1
	I0415 17:53:47.735167    1700 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.1 ...
	I0415 17:53:47.745422    1700 cli_runner.go:164] Run: docker exec -t functional-662500 dig +short host.docker.internal
	I0415 17:53:48.066001    1700 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0415 17:53:48.077004    1700 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0415 17:53:48.101797    1700 command_runner.go:130] > 192.168.65.254	host.minikube.internal
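The dig against host.docker.internal from inside the container discovers the Windows host's address (192.168.65.254 here), and the /etc/hosts grep confirms it is already published to the node as host.minikube.internal. Resolving it the same way an in-guest tool would (a sketch):

    # host.minikube.internal should resolve to the host IP found by the dig above.
    minikube ssh -p functional-662500 "getent hosts host.minikube.internal"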
	I0415 17:53:48.114393    1700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-662500
	I0415 17:53:48.286838    1700 kubeadm.go:877] updating cluster {Name:functional-662500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-662500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0415 17:53:48.286838    1700 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 17:53:48.295004    1700 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0415 17:53:48.335399    1700 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.29.3
	I0415 17:53:48.335399    1700 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.29.3
	I0415 17:53:48.335399    1700 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.29.3
	I0415 17:53:48.335399    1700 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.29.3
	I0415 17:53:48.335399    1700 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0415 17:53:48.335399    1700 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0415 17:53:48.335399    1700 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0415 17:53:48.335399    1700 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0415 17:53:48.338434    1700 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.3
	registry.k8s.io/kube-controller-manager:v1.29.3
	registry.k8s.io/kube-scheduler:v1.29.3
	registry.k8s.io/kube-proxy:v1.29.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0415 17:53:48.338434    1700 docker.go:615] Images already preloaded, skipping extraction
	I0415 17:53:48.347402    1700 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0415 17:53:48.399998    1700 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.29.3
	I0415 17:53:48.400035    1700 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.29.3
	I0415 17:53:48.400076    1700 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.29.3
	I0415 17:53:48.400076    1700 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.29.3
	I0415 17:53:48.400076    1700 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0415 17:53:48.400076    1700 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0415 17:53:48.400076    1700 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0415 17:53:48.400076    1700 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0415 17:53:48.400230    1700 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.3
	registry.k8s.io/kube-scheduler:v1.29.3
	registry.k8s.io/kube-controller-manager:v1.29.3
	registry.k8s.io/kube-proxy:v1.29.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0415 17:53:48.400317    1700 cache_images.go:84] Images are preloaded, skipping loading
	I0415 17:53:48.400382    1700 kubeadm.go:928] updating node { 192.168.49.2 8441 v1.29.3 docker true true} ...
	I0415 17:53:48.400568    1700 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-662500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:functional-662500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0415 17:53:48.411209    1700 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0415 17:53:48.539464    1700 command_runner.go:130] > cgroupfs
	I0415 17:53:48.539464    1700 cni.go:84] Creating CNI manager for ""
	I0415 17:53:48.539464    1700 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 17:53:48.539464    1700 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0415 17:53:48.539464    1700 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-662500 NodeName:functional-662500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0415 17:53:48.540150    1700 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-662500"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0415 17:53:48.552168    1700 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0415 17:53:48.600821    1700 command_runner.go:130] > kubeadm
	I0415 17:53:48.600821    1700 command_runner.go:130] > kubectl
	I0415 17:53:48.600821    1700 command_runner.go:130] > kubelet
	I0415 17:53:48.600912    1700 binaries.go:44] Found k8s binaries, skipping transfer
	I0415 17:53:48.619243    1700 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0415 17:53:48.705546    1700 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0415 17:53:48.901580    1700 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0415 17:53:49.199150    1700 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
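The rendered kubeadm config above is shipped to the node as /var/tmp/minikube/kubeadm.yaml.new and later diffed against the existing /var/tmp/minikube/kubeadm.yaml to decide whether this restart needs a re-init. Comparing the two by hand mirrors that check (a sketch):

    # An empty diff means the running control plane needs no reconfiguration.
    minikube ssh -p functional-662500 "sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new"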
	I0415 17:53:49.415622    1700 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0415 17:53:49.497901    1700 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I0415 17:53:49.518754    1700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 17:53:50.137038    1700 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0415 17:53:50.295173    1700 certs.go:68] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-662500 for IP: 192.168.49.2
	I0415 17:53:50.295173    1700 certs.go:194] generating shared ca certs ...
	I0415 17:53:50.295173    1700 certs.go:226] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 17:53:50.296307    1700 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I0415 17:53:50.297100    1700 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I0415 17:53:50.297298    1700 certs.go:256] generating profile certs ...
	I0415 17:53:50.298331    1700 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-662500\client.key
	I0415 17:53:50.299050    1700 certs.go:359] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-662500\apiserver.key.5cb93c24
	I0415 17:53:50.299436    1700 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-662500\proxy-client.key
	I0415 17:53:50.299518    1700 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0415 17:53:50.299896    1700 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0415 17:53:50.300150    1700 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0415 17:53:50.300501    1700 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0415 17:53:50.300776    1700 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-662500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0415 17:53:50.300992    1700 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-662500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0415 17:53:50.301277    1700 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-662500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0415 17:53:50.301277    1700 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-662500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0415 17:53:50.302092    1700 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11748.pem (1338 bytes)
	W0415 17:53:50.302531    1700 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11748_empty.pem, impossibly tiny 0 bytes
	I0415 17:53:50.302798    1700 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0415 17:53:50.303341    1700 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0415 17:53:50.303525    1700 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0415 17:53:50.304072    1700 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0415 17:53:50.304775    1700 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117482.pem (1708 bytes)
	I0415 17:53:50.304977    1700 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117482.pem -> /usr/share/ca-certificates/117482.pem
	I0415 17:53:50.305041    1700 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0415 17:53:50.305041    1700 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11748.pem -> /usr/share/ca-certificates/11748.pem
	I0415 17:53:50.306286    1700 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0415 17:53:50.503859    1700 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0415 17:53:50.712067    1700 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0415 17:53:51.004623    1700 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0415 17:53:51.295688    1700 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-662500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0415 17:53:51.515577    1700 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-662500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0415 17:53:51.896006    1700 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-662500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0415 17:53:52.105247    1700 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-662500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0415 17:53:52.311573    1700 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117482.pem --> /usr/share/ca-certificates/117482.pem (1708 bytes)
	I0415 17:53:52.518218    1700 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0415 17:53:52.624784    1700 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11748.pem --> /usr/share/ca-certificates/11748.pem (1338 bytes)
	I0415 17:53:52.718030    1700 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0415 17:53:52.817498    1700 ssh_runner.go:195] Run: openssl version
	I0415 17:53:52.827492    1700 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0415 17:53:52.839493    1700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11748.pem && ln -fs /usr/share/ca-certificates/11748.pem /etc/ssl/certs/11748.pem"
	I0415 17:53:52.910558    1700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11748.pem
	I0415 17:53:52.923937    1700 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 15 17:52 /usr/share/ca-certificates/11748.pem
	I0415 17:53:52.923937    1700 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 17:52 /usr/share/ca-certificates/11748.pem
	I0415 17:53:52.939769    1700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11748.pem
	I0415 17:53:52.990157    1700 command_runner.go:130] > 51391683
	I0415 17:53:53.002346    1700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11748.pem /etc/ssl/certs/51391683.0"
	I0415 17:53:53.034518    1700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/117482.pem && ln -fs /usr/share/ca-certificates/117482.pem /etc/ssl/certs/117482.pem"
	I0415 17:53:53.107200    1700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117482.pem
	I0415 17:53:53.115194    1700 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 15 17:52 /usr/share/ca-certificates/117482.pem
	I0415 17:53:53.115194    1700 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 17:52 /usr/share/ca-certificates/117482.pem
	I0415 17:53:53.126204    1700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117482.pem
	I0415 17:53:53.191721    1700 command_runner.go:130] > 3ec20f2e
	I0415 17:53:53.204711    1700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/117482.pem /etc/ssl/certs/3ec20f2e.0"
	I0415 17:53:53.242858    1700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0415 17:53:53.325259    1700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0415 17:53:53.390634    1700 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 15 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I0415 17:53:53.390634    1700 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I0415 17:53:53.404622    1700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0415 17:53:53.421981    1700 command_runner.go:130] > b5213941
	I0415 17:53:53.442896    1700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
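Each CA copied into /usr/share/ca-certificates is linked into /etc/ssl/certs under its OpenSSL subject hash (51391683.0, 3ec20f2e.0, b5213941.0 above), which is how the node's trust store discovers it. Reproducing one hash by hand (a sketch, using the minikube CA from this run):

    # The hash printed here (b5213941) names the symlink created above.
    minikube ssh -p functional-662500 "openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem && ls -l /etc/ssl/certs/b5213941.0"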
	I0415 17:53:53.528209    1700 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0415 17:53:53.602186    1700 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0415 17:53:53.602186    1700 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0415 17:53:53.602186    1700 command_runner.go:130] > Device: 830h/2096d	Inode: 19206       Links: 1
	I0415 17:53:53.602186    1700 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0415 17:53:53.602186    1700 command_runner.go:130] > Access: 2024-04-15 17:52:53.976849049 +0000
	I0415 17:53:53.602186    1700 command_runner.go:130] > Modify: 2024-04-15 17:52:53.976849049 +0000
	I0415 17:53:53.602186    1700 command_runner.go:130] > Change: 2024-04-15 17:52:53.976849049 +0000
	I0415 17:53:53.602186    1700 command_runner.go:130] >  Birth: 2024-04-15 17:52:53.976849049 +0000
	I0415 17:53:53.619721    1700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0415 17:53:53.698156    1700 command_runner.go:130] > Certificate will not expire
	I0415 17:53:53.714755    1700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0415 17:53:53.792220    1700 command_runner.go:130] > Certificate will not expire
	I0415 17:53:53.809739    1700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0415 17:53:53.898321    1700 command_runner.go:130] > Certificate will not expire
	I0415 17:53:53.918207    1700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0415 17:53:54.005861    1700 command_runner.go:130] > Certificate will not expire
	I0415 17:53:54.018520    1700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0415 17:53:54.096770    1700 command_runner.go:130] > Certificate will not expire
	I0415 17:53:54.109845    1700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0415 17:53:54.126171    1700 command_runner.go:130] > Certificate will not expire
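Each control-plane certificate is probed with openssl x509 -checkend 86400, which exits 0 only if the certificate is still valid 24 hours from now; a non-zero exit would flag the cert for regeneration instead of the "will not expire" messages above. The same probe on one cert (a sketch):

    # Exit status 0 (and the echo) means the cert is valid for at least another 24h.
    minikube ssh -p functional-662500 "sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver.crt && echo 'valid for >= 24h'"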
	I0415 17:53:54.126171    1700 kubeadm.go:391] StartCluster: {Name:functional-662500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-662500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binar
yMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 17:53:54.136709    1700 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0415 17:53:54.316957    1700 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0415 17:53:54.399290    1700 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0415 17:53:54.399391    1700 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0415 17:53:54.399391    1700 command_runner.go:130] > /var/lib/minikube/etcd:
	I0415 17:53:54.399391    1700 command_runner.go:130] > member
	W0415 17:53:54.399391    1700 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0415 17:53:54.399496    1700 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0415 17:53:54.399536    1700 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0415 17:53:54.411124    1700 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0415 17:53:54.427124    1700 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0415 17:53:54.435122    1700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-662500
	I0415 17:53:54.611929    1700 kubeconfig.go:125] found "functional-662500" server: "https://127.0.0.1:51310"
	I0415 17:53:54.613544    1700 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0415 17:53:54.613544    1700 kapi.go:59] client config for functional-662500: &rest.Config{Host:"https://127.0.0.1:51310", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-662500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-662500\\client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22e1600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0415 17:53:54.614979    1700 cert_rotation.go:137] Starting client certificate rotation controller
	I0415 17:53:54.627969    1700 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0415 17:53:54.705549    1700 kubeadm.go:624] The running cluster does not require reconfiguration: 127.0.0.1
	I0415 17:53:54.707016    1700 kubeadm.go:591] duration metric: took 307.4664ms to restartPrimaryControlPlane
	I0415 17:53:54.707061    1700 kubeadm.go:393] duration metric: took 580.8618ms to StartCluster
	I0415 17:53:54.707100    1700 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 17:53:54.707335    1700 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0415 17:53:54.708711    1700 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 17:53:54.710236    1700 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 17:53:54.710172    1700 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0415 17:53:54.710448    1700 addons.go:69] Setting storage-provisioner=true in profile "functional-662500"
	I0415 17:53:54.710448    1700 addons.go:69] Setting default-storageclass=true in profile "functional-662500"
	I0415 17:53:54.714129    1700 out.go:177] * Verifying Kubernetes components...
	I0415 17:53:54.710522    1700 addons.go:234] Setting addon storage-provisioner=true in "functional-662500"
	I0415 17:53:54.710522    1700 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-662500"
	I0415 17:53:54.710755    1700 config.go:182] Loaded profile config "functional-662500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	W0415 17:53:54.714208    1700 addons.go:243] addon storage-provisioner should already be in state true
	I0415 17:53:54.714208    1700 host.go:66] Checking if "functional-662500" exists ...
	I0415 17:53:54.734243    1700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 17:53:54.740331    1700 cli_runner.go:164] Run: docker container inspect functional-662500 --format={{.State.Status}}
	I0415 17:53:54.741645    1700 cli_runner.go:164] Run: docker container inspect functional-662500 --format={{.State.Status}}
	I0415 17:53:54.910898    1700 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0415 17:53:54.908899    1700 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0415 17:53:54.910898    1700 kapi.go:59] client config for functional-662500: &rest.Config{Host:"https://127.0.0.1:51310", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-662500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-662500\\client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22e1600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0415 17:53:54.913893    1700 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0415 17:53:54.913893    1700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0415 17:53:54.914894    1700 addons.go:234] Setting addon default-storageclass=true in "functional-662500"
	W0415 17:53:54.914894    1700 addons.go:243] addon default-storageclass should already be in state true
	I0415 17:53:54.914894    1700 host.go:66] Checking if "functional-662500" exists ...
	I0415 17:53:54.925890    1700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-662500
	I0415 17:53:54.935892    1700 cli_runner.go:164] Run: docker container inspect functional-662500 --format={{.State.Status}}
	I0415 17:53:55.111443    1700 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0415 17:53:55.111535    1700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0415 17:53:55.124405    1700 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51311 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-662500\id_rsa Username:docker}
	I0415 17:53:55.129849    1700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-662500
	I0415 17:53:55.297598    1700 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51311 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-662500\id_rsa Username:docker}
	I0415 17:53:55.312242    1700 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0415 17:53:55.417785    1700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-662500
	I0415 17:53:55.438045    1700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0415 17:53:55.526029    1700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0415 17:53:55.575903    1700 node_ready.go:35] waiting up to 6m0s for node "functional-662500" to be "Ready" ...
	I0415 17:53:55.575903    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/nodes/functional-662500
	I0415 17:53:55.575903    1700 round_trippers.go:469] Request Headers:
	I0415 17:53:55.576431    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:53:55.576508    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:53:56.996449    1700 round_trippers.go:574] Response Status: 200 OK in 1419 milliseconds
	I0415 17:53:56.996642    1700 round_trippers.go:577] Response Headers:
	I0415 17:53:56.996642    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0415 17:53:56.996642    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0415 17:53:56.996642    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:53:56 GMT
	I0415 17:53:56.996642    1700 round_trippers.go:580]     Audit-Id: 62ffc703-b8e7-47fb-908e-c79006457644
	I0415 17:53:56.996642    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:53:56.996642    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:53:56.996934    1700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","resourceVersion":"402","creationTimestamp":"2024-04-15T17:53:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-662500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-662500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T17_53_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T17:53:03Z","fieldsType":"Fie [truncated 4862 chars]
	I0415 17:53:56.999060    1700 node_ready.go:49] node "functional-662500" has status "Ready":"True"
	I0415 17:53:56.999125    1700 node_ready.go:38] duration metric: took 1.4231547s for node "functional-662500" to be "Ready" ...
	I0415 17:53:56.999125    1700 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0415 17:53:56.999125    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/namespaces/kube-system/pods
	I0415 17:53:56.999125    1700 round_trippers.go:469] Request Headers:
	I0415 17:53:56.999125    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:53:56.999125    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:53:57.102780    1700 round_trippers.go:574] Response Status: 200 OK in 103 milliseconds
	I0415 17:53:57.102911    1700 round_trippers.go:577] Response Headers:
	I0415 17:53:57.102911    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:53:57.102911    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:53:57.102911    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:53:57.102911    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:53:57.103037    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:53:57 GMT
	I0415 17:53:57.103037    1700 round_trippers.go:580]     Audit-Id: 80b5e5f1-6aff-40ad-a1d9-9556252cc72c
	I0415 17:53:57.104613    1700 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"411"},"items":[{"metadata":{"name":"coredns-76f75df574-4mhcz","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"e691e826-53b3-4913-8044-49462113527f","resourceVersion":"392","creationTimestamp":"2024-04-15T17:53:21Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"a75528b0-b3dd-42a1-99a4-9919acdc57a1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-15T17:53:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a75528b0-b3dd-42a1-99a4-9919acdc57a1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50306 chars]
	I0415 17:53:57.109890    1700 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-4mhcz" in "kube-system" namespace to be "Ready" ...
	I0415 17:53:57.109890    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/namespaces/kube-system/pods/coredns-76f75df574-4mhcz
	I0415 17:53:57.109890    1700 round_trippers.go:469] Request Headers:
	I0415 17:53:57.109890    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:53:57.109890    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:53:57.199776    1700 round_trippers.go:574] Response Status: 200 OK in 89 milliseconds
	I0415 17:53:57.199867    1700 round_trippers.go:577] Response Headers:
	I0415 17:53:57.199867    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:53:57.199867    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:53:57.199867    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:53:57 GMT
	I0415 17:53:57.199867    1700 round_trippers.go:580]     Audit-Id: b00e2961-6687-41b8-b73b-0cfc2f5729bb
	I0415 17:53:57.199867    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:53:57.199975    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:53:57.200287    1700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-4mhcz","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"e691e826-53b3-4913-8044-49462113527f","resourceVersion":"392","creationTimestamp":"2024-04-15T17:53:21Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"a75528b0-b3dd-42a1-99a4-9919acdc57a1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-15T17:53:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a75528b0-b3dd-42a1-99a4-9919acdc57a1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6239 chars]
	I0415 17:53:57.201435    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/nodes/functional-662500
	I0415 17:53:57.201435    1700 round_trippers.go:469] Request Headers:
	I0415 17:53:57.201582    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:53:57.201582    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:53:57.290770    1700 round_trippers.go:574] Response Status: 200 OK in 88 milliseconds
	I0415 17:53:57.290862    1700 round_trippers.go:577] Response Headers:
	I0415 17:53:57.290862    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:53:57.290862    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:53:57 GMT
	I0415 17:53:57.290937    1700 round_trippers.go:580]     Audit-Id: 0089043a-2caa-4256-8391-1c9a6bafefa4
	I0415 17:53:57.290937    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:53:57.290937    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:53:57.290967    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:53:57.291433    1700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","resourceVersion":"402","creationTimestamp":"2024-04-15T17:53:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-662500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-662500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T17_53_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T17:53:03Z","fieldsType":"Fie [truncated 4862 chars]
	I0415 17:53:57.291958    1700 pod_ready.go:92] pod "coredns-76f75df574-4mhcz" in "kube-system" namespace has status "Ready":"True"
	I0415 17:53:57.292006    1700 pod_ready.go:81] duration metric: took 182.1074ms for pod "coredns-76f75df574-4mhcz" in "kube-system" namespace to be "Ready" ...
	I0415 17:53:57.292006    1700 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-662500" in "kube-system" namespace to be "Ready" ...
	I0415 17:53:57.292187    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/namespaces/kube-system/pods/etcd-functional-662500
	I0415 17:53:57.292187    1700 round_trippers.go:469] Request Headers:
	I0415 17:53:57.292187    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:53:57.292187    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:53:57.300074    1700 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0415 17:53:57.300074    1700 round_trippers.go:577] Response Headers:
	I0415 17:53:57.300074    1700 round_trippers.go:580]     Audit-Id: fe1eeaab-a7d1-4151-9cef-359268ae16cd
	I0415 17:53:57.300074    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:53:57.300074    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:53:57.300074    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:53:57.300074    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:53:57.300074    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:53:57 GMT
	I0415 17:53:57.301082    1700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-662500","namespace":"kube-system","uid":"cb6ac505-14b1-4680-987d-8374118db5e6","resourceVersion":"267","creationTimestamp":"2024-04-15T17:53:07Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"afc0bd85e5642afc4ef76d5bfc7ddf78","kubernetes.io/config.mirror":"afc0bd85e5642afc4ef76d5bfc7ddf78","kubernetes.io/config.seen":"2024-04-15T17:52:57.802485065Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T17:53:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6153 chars]
	I0415 17:53:57.301082    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/nodes/functional-662500
	I0415 17:53:57.301082    1700 round_trippers.go:469] Request Headers:
	I0415 17:53:57.301082    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:53:57.301082    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:53:57.313843    1700 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0415 17:53:57.313882    1700 round_trippers.go:577] Response Headers:
	I0415 17:53:57.313882    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:53:57 GMT
	I0415 17:53:57.313882    1700 round_trippers.go:580]     Audit-Id: 5d9b4f0f-a9e3-43ac-8a59-d8bd283cbbe5
	I0415 17:53:57.313882    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:53:57.313882    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:53:57.313882    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:53:57.313882    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:53:57.314106    1700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","resourceVersion":"402","creationTimestamp":"2024-04-15T17:53:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-662500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-662500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T17_53_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T17:53:03Z","fieldsType":"Fie [truncated 4862 chars]
	I0415 17:53:57.314106    1700 pod_ready.go:92] pod "etcd-functional-662500" in "kube-system" namespace has status "Ready":"True"
	I0415 17:53:57.314637    1700 pod_ready.go:81] duration metric: took 22.6305ms for pod "etcd-functional-662500" in "kube-system" namespace to be "Ready" ...
	I0415 17:53:57.314637    1700 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-662500" in "kube-system" namespace to be "Ready" ...
	I0415 17:53:57.314760    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-662500
	I0415 17:53:57.314844    1700 round_trippers.go:469] Request Headers:
	I0415 17:53:57.314864    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:53:57.314864    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:53:57.321767    1700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0415 17:53:57.321767    1700 round_trippers.go:577] Response Headers:
	I0415 17:53:57.321767    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:53:57.321767    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:53:57 GMT
	I0415 17:53:57.321767    1700 round_trippers.go:580]     Audit-Id: c7289e6d-129c-4b71-ac1f-08f437e74bab
	I0415 17:53:57.321767    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:53:57.321767    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:53:57.321767    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:53:57.322757    1700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-662500","namespace":"kube-system","uid":"452b972b-481c-47cc-a4ff-5017115e59b8","resourceVersion":"415","creationTimestamp":"2024-04-15T17:53:08Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"cf22128e1f7ff41b8f2626e32eecb6cb","kubernetes.io/config.mirror":"cf22128e1f7ff41b8f2626e32eecb6cb","kubernetes.io/config.seen":"2024-04-15T17:53:08.005760848Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T17:53:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I0415 17:53:57.322757    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/nodes/functional-662500
	I0415 17:53:57.322757    1700 round_trippers.go:469] Request Headers:
	I0415 17:53:57.322757    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:53:57.322757    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:53:57.402041    1700 round_trippers.go:574] Response Status: 200 OK in 79 milliseconds
	I0415 17:53:57.402134    1700 round_trippers.go:577] Response Headers:
	I0415 17:53:57.402184    1700 round_trippers.go:580]     Audit-Id: d54d00ca-9adc-4f49-b7c2-eb4df321d468
	I0415 17:53:57.402184    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:53:57.402184    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:53:57.402184    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:53:57.402184    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:53:57.402184    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:53:57 GMT
	I0415 17:53:57.402825    1700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","resourceVersion":"402","creationTimestamp":"2024-04-15T17:53:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-662500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-662500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T17_53_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T17:53:03Z","fieldsType":"Fie [truncated 4862 chars]
	I0415 17:53:57.815721    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-662500
	I0415 17:53:57.815782    1700 round_trippers.go:469] Request Headers:
	I0415 17:53:57.815782    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:53:57.815808    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:53:57.822796    1700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0415 17:53:57.822796    1700 round_trippers.go:577] Response Headers:
	I0415 17:53:57.822796    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:53:57.822796    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:53:57 GMT
	I0415 17:53:57.822796    1700 round_trippers.go:580]     Audit-Id: 51080538-8540-49c6-80be-675c6462a10a
	I0415 17:53:57.822796    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:53:57.822796    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:53:57.822796    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:53:57.823504    1700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-662500","namespace":"kube-system","uid":"452b972b-481c-47cc-a4ff-5017115e59b8","resourceVersion":"415","creationTimestamp":"2024-04-15T17:53:08Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"cf22128e1f7ff41b8f2626e32eecb6cb","kubernetes.io/config.mirror":"cf22128e1f7ff41b8f2626e32eecb6cb","kubernetes.io/config.seen":"2024-04-15T17:53:08.005760848Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T17:53:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I0415 17:53:57.824682    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/nodes/functional-662500
	I0415 17:53:57.824682    1700 round_trippers.go:469] Request Headers:
	I0415 17:53:57.824682    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:53:57.824682    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:53:57.831098    1700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0415 17:53:57.831098    1700 round_trippers.go:577] Response Headers:
	I0415 17:53:57.831098    1700 round_trippers.go:580]     Audit-Id: 1e20a915-2f51-42db-95ad-581dc5521237
	I0415 17:53:57.831098    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:53:57.831098    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:53:57.831098    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:53:57.831253    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:53:57.831253    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:53:57 GMT
	I0415 17:53:57.831634    1700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","resourceVersion":"402","creationTimestamp":"2024-04-15T17:53:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-662500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-662500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T17_53_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T17:53:03Z","fieldsType":"Fie [truncated 4862 chars]
	I0415 17:53:58.311842    1700 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0415 17:53:58.311842    1700 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0415 17:53:58.311842    1700 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0415 17:53:58.311842    1700 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0415 17:53:58.311842    1700 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0415 17:53:58.311842    1700 command_runner.go:130] > pod/storage-provisioner configured
	I0415 17:53:58.311842    1700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.8736615s)
	I0415 17:53:58.311842    1700 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0415 17:53:58.311842    1700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.785682s)
	I0415 17:53:58.311842    1700 round_trippers.go:463] GET https://127.0.0.1:51310/apis/storage.k8s.io/v1/storageclasses
	I0415 17:53:58.311842    1700 round_trippers.go:469] Request Headers:
	I0415 17:53:58.311842    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:53:58.311842    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:53:58.316973    1700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 17:53:58.316973    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-662500
	I0415 17:53:58.317047    1700 round_trippers.go:469] Request Headers:
	I0415 17:53:58.317047    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:53:58.317047    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:53:58.316973    1700 round_trippers.go:577] Response Headers:
	I0415 17:53:58.317047    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:53:58.317047    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:53:58.317047    1700 round_trippers.go:580]     Content-Length: 1273
	I0415 17:53:58.317047    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:53:58 GMT
	I0415 17:53:58.317047    1700 round_trippers.go:580]     Audit-Id: 469f01a9-fb49-4b71-92aa-3df0cbf3016b
	I0415 17:53:58.317047    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:53:58.317047    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:53:58.317047    1700 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"423"},"items":[{"metadata":{"name":"standard","uid":"8111c379-506c-49b2-8dda-67a23d956910","resourceVersion":"351","creationTimestamp":"2024-04-15T17:53:22Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-15T17:53:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0415 17:53:58.317784    1700 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"8111c379-506c-49b2-8dda-67a23d956910","resourceVersion":"351","creationTimestamp":"2024-04-15T17:53:22Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-15T17:53:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0415 17:53:58.317784    1700 round_trippers.go:463] PUT https://127.0.0.1:51310/apis/storage.k8s.io/v1/storageclasses/standard
	I0415 17:53:58.317784    1700 round_trippers.go:469] Request Headers:
	I0415 17:53:58.317784    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:53:58.317784    1700 round_trippers.go:473]     Content-Type: application/json
	I0415 17:53:58.317784    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:53:58.321053    1700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 17:53:58.321053    1700 round_trippers.go:577] Response Headers:
	I0415 17:53:58.321053    1700 round_trippers.go:580]     Audit-Id: ce6cab8f-616f-4855-a85e-88c8bd69f7b9
	I0415 17:53:58.321053    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:53:58.321053    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:53:58.321053    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:53:58.321053    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:53:58.321053    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:53:58 GMT
	I0415 17:53:58.321671    1700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-662500","namespace":"kube-system","uid":"452b972b-481c-47cc-a4ff-5017115e59b8","resourceVersion":"415","creationTimestamp":"2024-04-15T17:53:08Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"cf22128e1f7ff41b8f2626e32eecb6cb","kubernetes.io/config.mirror":"cf22128e1f7ff41b8f2626e32eecb6cb","kubernetes.io/config.seen":"2024-04-15T17:53:08.005760848Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T17:53:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I0415 17:53:58.322885    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/nodes/functional-662500
	I0415 17:53:58.322946    1700 round_trippers.go:469] Request Headers:
	I0415 17:53:58.322946    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:53:58.323007    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:53:58.323259    1700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 17:53:58.324256    1700 round_trippers.go:577] Response Headers:
	I0415 17:53:58.324256    1700 round_trippers.go:580]     Audit-Id: a857af40-1fa7-4256-b99c-096ce204abcb
	I0415 17:53:58.324256    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:53:58.324256    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:53:58.324256    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:53:58.324256    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:53:58.324256    1700 round_trippers.go:580]     Content-Length: 1220
	I0415 17:53:58.324256    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:53:58 GMT
	I0415 17:53:58.324256    1700 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"8111c379-506c-49b2-8dda-67a23d956910","resourceVersion":"351","creationTimestamp":"2024-04-15T17:53:22Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-15T17:53:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0415 17:53:58.328180    1700 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0415 17:53:58.330303    1700 addons.go:505] duration metric: took 3.6199601s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0415 17:53:58.330303    1700 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0415 17:53:58.330303    1700 round_trippers.go:577] Response Headers:
	I0415 17:53:58.330303    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:53:58.330303    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:53:58.330303    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:53:58.330303    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:53:58.330303    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:53:58 GMT
	I0415 17:53:58.330303    1700 round_trippers.go:580]     Audit-Id: 40707409-c741-416d-ba47-f3477323550f
	I0415 17:53:58.330303    1700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","resourceVersion":"402","creationTimestamp":"2024-04-15T17:53:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-662500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-662500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T17_53_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T17:53:03Z","fieldsType":"Fie [truncated 4862 chars]
	I0415 17:53:58.820599    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-662500
	I0415 17:53:58.820599    1700 round_trippers.go:469] Request Headers:
	I0415 17:53:58.820599    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:53:58.820599    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:53:58.828408    1700 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0415 17:53:58.828408    1700 round_trippers.go:577] Response Headers:
	I0415 17:53:58.828408    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:53:58.828408    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:53:58.828408    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:53:58 GMT
	I0415 17:53:58.828408    1700 round_trippers.go:580]     Audit-Id: d8bc4d86-097e-42b1-bace-cbcb9fcf0628
	I0415 17:53:58.828408    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:53:58.828408    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:53:58.829000    1700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-662500","namespace":"kube-system","uid":"452b972b-481c-47cc-a4ff-5017115e59b8","resourceVersion":"415","creationTimestamp":"2024-04-15T17:53:08Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"cf22128e1f7ff41b8f2626e32eecb6cb","kubernetes.io/config.mirror":"cf22128e1f7ff41b8f2626e32eecb6cb","kubernetes.io/config.seen":"2024-04-15T17:53:08.005760848Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T17:53:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I0415 17:53:58.829616    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/nodes/functional-662500
	I0415 17:53:58.829616    1700 round_trippers.go:469] Request Headers:
	I0415 17:53:58.829616    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:53:58.829616    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:53:58.837242    1700 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0415 17:53:58.837242    1700 round_trippers.go:577] Response Headers:
	I0415 17:53:58.837242    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:53:58.837242    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:53:58 GMT
	I0415 17:53:58.837242    1700 round_trippers.go:580]     Audit-Id: 108df785-48d7-42df-9253-d26919c4d46a
	I0415 17:53:58.837242    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:53:58.837242    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:53:58.837242    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:53:58.837242    1700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","resourceVersion":"402","creationTimestamp":"2024-04-15T17:53:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-662500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-662500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T17_53_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T17:53:03Z","fieldsType":"Fie [truncated 4862 chars]
	I0415 17:53:59.326468    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-662500
	I0415 17:53:59.326703    1700 round_trippers.go:469] Request Headers:
	I0415 17:53:59.326703    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:53:59.326703    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:53:59.336624    1700 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0415 17:53:59.336624    1700 round_trippers.go:577] Response Headers:
	I0415 17:53:59.336624    1700 round_trippers.go:580]     Audit-Id: 0e5041f8-2b85-49ce-ad31-7d3faa2b9766
	I0415 17:53:59.336624    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:53:59.336624    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:53:59.336624    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:53:59.336624    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:53:59.336624    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:53:59 GMT
	I0415 17:53:59.337287    1700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-662500","namespace":"kube-system","uid":"452b972b-481c-47cc-a4ff-5017115e59b8","resourceVersion":"415","creationTimestamp":"2024-04-15T17:53:08Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"cf22128e1f7ff41b8f2626e32eecb6cb","kubernetes.io/config.mirror":"cf22128e1f7ff41b8f2626e32eecb6cb","kubernetes.io/config.seen":"2024-04-15T17:53:08.005760848Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T17:53:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I0415 17:53:59.341685    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/nodes/functional-662500
	I0415 17:53:59.341685    1700 round_trippers.go:469] Request Headers:
	I0415 17:53:59.341685    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:53:59.341685    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:53:59.397477    1700 round_trippers.go:574] Response Status: 200 OK in 55 milliseconds
	I0415 17:53:59.397477    1700 round_trippers.go:577] Response Headers:
	I0415 17:53:59.397566    1700 round_trippers.go:580]     Audit-Id: 9496d610-dae3-4c63-96f5-50e5334a69ac
	I0415 17:53:59.397566    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:53:59.397566    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:53:59.397566    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:53:59.397566    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:53:59.397566    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:53:59 GMT
	I0415 17:53:59.397779    1700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","resourceVersion":"402","creationTimestamp":"2024-04-15T17:53:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-662500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-662500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T17_53_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T17:53:03Z","fieldsType":"Fie [truncated 4862 chars]
	I0415 17:53:59.398327    1700 pod_ready.go:102] pod "kube-apiserver-functional-662500" in "kube-system" namespace has status "Ready":"False"
	I0415 17:53:59.830137    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-662500
	I0415 17:53:59.830137    1700 round_trippers.go:469] Request Headers:
	I0415 17:53:59.830137    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:53:59.830243    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:53:59.835667    1700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 17:53:59.835667    1700 round_trippers.go:577] Response Headers:
	I0415 17:53:59.835667    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:53:59.835667    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:53:59 GMT
	I0415 17:53:59.835667    1700 round_trippers.go:580]     Audit-Id: 6180e8e0-ec3c-4c86-9bf0-b06f72de4220
	I0415 17:53:59.835667    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:53:59.835667    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:53:59.835667    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:53:59.835667    1700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-662500","namespace":"kube-system","uid":"452b972b-481c-47cc-a4ff-5017115e59b8","resourceVersion":"415","creationTimestamp":"2024-04-15T17:53:08Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"cf22128e1f7ff41b8f2626e32eecb6cb","kubernetes.io/config.mirror":"cf22128e1f7ff41b8f2626e32eecb6cb","kubernetes.io/config.seen":"2024-04-15T17:53:08.005760848Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T17:53:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I0415 17:53:59.836673    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/nodes/functional-662500
	I0415 17:53:59.836673    1700 round_trippers.go:469] Request Headers:
	I0415 17:53:59.836673    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:53:59.836673    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:53:59.841657    1700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 17:53:59.841657    1700 round_trippers.go:577] Response Headers:
	I0415 17:53:59.841657    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:53:59 GMT
	I0415 17:53:59.841657    1700 round_trippers.go:580]     Audit-Id: f0a994af-0c01-4e2a-8226-ce257449c84d
	I0415 17:53:59.841657    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:53:59.841657    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:53:59.841657    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:53:59.841657    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:53:59.841657    1700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","resourceVersion":"402","creationTimestamp":"2024-04-15T17:53:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-662500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-662500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T17_53_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T17:53:03Z","fieldsType":"Fie [truncated 4862 chars]
	I0415 17:54:00.329751    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-662500
	I0415 17:54:00.329812    1700 round_trippers.go:469] Request Headers:
	I0415 17:54:00.329812    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:54:00.329812    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:54:00.336496    1700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0415 17:54:00.336564    1700 round_trippers.go:577] Response Headers:
	I0415 17:54:00.336626    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:54:00.336626    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:54:00.336626    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:54:00 GMT
	I0415 17:54:00.336626    1700 round_trippers.go:580]     Audit-Id: 90f77bb6-351a-4258-8eec-ecf11e2842f6
	I0415 17:54:00.336626    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:54:00.336626    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:54:00.336959    1700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-662500","namespace":"kube-system","uid":"452b972b-481c-47cc-a4ff-5017115e59b8","resourceVersion":"415","creationTimestamp":"2024-04-15T17:53:08Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"cf22128e1f7ff41b8f2626e32eecb6cb","kubernetes.io/config.mirror":"cf22128e1f7ff41b8f2626e32eecb6cb","kubernetes.io/config.seen":"2024-04-15T17:53:08.005760848Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T17:53:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I0415 17:54:00.337172    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/nodes/functional-662500
	I0415 17:54:00.337172    1700 round_trippers.go:469] Request Headers:
	I0415 17:54:00.337172    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:54:00.337172    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:54:00.342189    1700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 17:54:00.342189    1700 round_trippers.go:577] Response Headers:
	I0415 17:54:00.342189    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:54:00.342189    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:54:00.342189    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:54:00.342189    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:54:00.342189    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:54:00 GMT
	I0415 17:54:00.342189    1700 round_trippers.go:580]     Audit-Id: 6f336635-d5da-4ecb-a4b3-0fd2887cea11
	I0415 17:54:00.342874    1700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","resourceVersion":"402","creationTimestamp":"2024-04-15T17:53:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-662500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-662500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T17_53_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T17:53:03Z","fieldsType":"Fie [truncated 4862 chars]
	I0415 17:54:00.826402    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-662500
	I0415 17:54:00.826503    1700 round_trippers.go:469] Request Headers:
	I0415 17:54:00.826503    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:54:00.826503    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:54:00.831954    1700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 17:54:00.831954    1700 round_trippers.go:577] Response Headers:
	I0415 17:54:00.831954    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:54:00.831954    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:54:00 GMT
	I0415 17:54:00.831954    1700 round_trippers.go:580]     Audit-Id: 1adb8774-68f6-47c6-83a0-f7f8c83ecca7
	I0415 17:54:00.831954    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:54:00.831954    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:54:00.831954    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:54:00.833171    1700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-662500","namespace":"kube-system","uid":"452b972b-481c-47cc-a4ff-5017115e59b8","resourceVersion":"415","creationTimestamp":"2024-04-15T17:53:08Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"cf22128e1f7ff41b8f2626e32eecb6cb","kubernetes.io/config.mirror":"cf22128e1f7ff41b8f2626e32eecb6cb","kubernetes.io/config.seen":"2024-04-15T17:53:08.005760848Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T17:53:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I0415 17:54:00.833806    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/nodes/functional-662500
	I0415 17:54:00.833806    1700 round_trippers.go:469] Request Headers:
	I0415 17:54:00.833806    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:54:00.833806    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:54:00.840105    1700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0415 17:54:00.840105    1700 round_trippers.go:577] Response Headers:
	I0415 17:54:00.840105    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:54:00.840105    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:54:00.840105    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:54:00.840105    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:54:00.840105    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:54:00 GMT
	I0415 17:54:00.840105    1700 round_trippers.go:580]     Audit-Id: 83a57e41-818b-4577-81ff-2cacb5ad2991
	I0415 17:54:00.840105    1700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","resourceVersion":"402","creationTimestamp":"2024-04-15T17:53:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-662500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-662500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T17_53_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T17:53:03Z","fieldsType":"Fie [truncated 4862 chars]
	I0415 17:54:01.327118    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-662500
	I0415 17:54:01.327195    1700 round_trippers.go:469] Request Headers:
	I0415 17:54:01.327195    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:54:01.327272    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:54:01.332999    1700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 17:54:01.332999    1700 round_trippers.go:577] Response Headers:
	I0415 17:54:01.332999    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:54:01.332999    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:54:01.332999    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:54:01 GMT
	I0415 17:54:01.332999    1700 round_trippers.go:580]     Audit-Id: e840fb1c-109d-4689-9306-048667e4d9ad
	I0415 17:54:01.332999    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:54:01.332999    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:54:01.332999    1700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-662500","namespace":"kube-system","uid":"452b972b-481c-47cc-a4ff-5017115e59b8","resourceVersion":"415","creationTimestamp":"2024-04-15T17:53:08Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"cf22128e1f7ff41b8f2626e32eecb6cb","kubernetes.io/config.mirror":"cf22128e1f7ff41b8f2626e32eecb6cb","kubernetes.io/config.seen":"2024-04-15T17:53:08.005760848Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T17:53:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I0415 17:54:01.334915    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/nodes/functional-662500
	I0415 17:54:01.334915    1700 round_trippers.go:469] Request Headers:
	I0415 17:54:01.334915    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:54:01.334915    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:54:01.340991    1700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0415 17:54:01.341080    1700 round_trippers.go:577] Response Headers:
	I0415 17:54:01.341080    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:54:01 GMT
	I0415 17:54:01.341080    1700 round_trippers.go:580]     Audit-Id: 7e6e3e1e-26db-4df9-ac4e-ad7077f7c312
	I0415 17:54:01.341080    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:54:01.341080    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:54:01.341080    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:54:01.341080    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:54:01.341080    1700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","resourceVersion":"402","creationTimestamp":"2024-04-15T17:53:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-662500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-662500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T17_53_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T17:53:03Z","fieldsType":"Fie [truncated 4862 chars]
	I0415 17:54:01.825424    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-662500
	I0415 17:54:01.825424    1700 round_trippers.go:469] Request Headers:
	I0415 17:54:01.825424    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:54:01.825424    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:54:01.831590    1700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0415 17:54:01.831840    1700 round_trippers.go:577] Response Headers:
	I0415 17:54:01.831840    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:54:01.831840    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:54:01.831840    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:54:01.831840    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:54:01.831840    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:54:01 GMT
	I0415 17:54:01.831840    1700 round_trippers.go:580]     Audit-Id: e34a432b-f24f-4e55-84ec-7b9500db48e7
	I0415 17:54:01.831840    1700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-662500","namespace":"kube-system","uid":"452b972b-481c-47cc-a4ff-5017115e59b8","resourceVersion":"415","creationTimestamp":"2024-04-15T17:53:08Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"cf22128e1f7ff41b8f2626e32eecb6cb","kubernetes.io/config.mirror":"cf22128e1f7ff41b8f2626e32eecb6cb","kubernetes.io/config.seen":"2024-04-15T17:53:08.005760848Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T17:53:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I0415 17:54:01.832730    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/nodes/functional-662500
	I0415 17:54:01.832730    1700 round_trippers.go:469] Request Headers:
	I0415 17:54:01.832730    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:54:01.832730    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:54:01.838770    1700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0415 17:54:01.838770    1700 round_trippers.go:577] Response Headers:
	I0415 17:54:01.838770    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:54:01.838770    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:54:01.838770    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:54:01.838770    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:54:01.838770    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:54:01 GMT
	I0415 17:54:01.838770    1700 round_trippers.go:580]     Audit-Id: 4b4df82d-94be-4e8c-9ae1-bf36bb581116
	I0415 17:54:01.838770    1700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","resourceVersion":"402","creationTimestamp":"2024-04-15T17:53:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-662500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-662500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T17_53_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T17:53:03Z","fieldsType":"Fie [truncated 4862 chars]
	I0415 17:54:01.839900    1700 pod_ready.go:102] pod "kube-apiserver-functional-662500" in "kube-system" namespace has status "Ready":"False"
	I0415 17:54:02.322918    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-662500
	I0415 17:54:02.322918    1700 round_trippers.go:469] Request Headers:
	I0415 17:54:02.323105    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:54:02.323105    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:54:02.337099    1700 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0415 17:54:02.337258    1700 round_trippers.go:577] Response Headers:
	I0415 17:54:02.337258    1700 round_trippers.go:580]     Audit-Id: 83022220-152a-459b-a53f-5ae9d50b0441
	I0415 17:54:02.337258    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:54:02.337258    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:54:02.337258    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:54:02.337258    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:54:02.337258    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:54:02 GMT
	I0415 17:54:02.337524    1700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-662500","namespace":"kube-system","uid":"452b972b-481c-47cc-a4ff-5017115e59b8","resourceVersion":"415","creationTimestamp":"2024-04-15T17:53:08Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"cf22128e1f7ff41b8f2626e32eecb6cb","kubernetes.io/config.mirror":"cf22128e1f7ff41b8f2626e32eecb6cb","kubernetes.io/config.seen":"2024-04-15T17:53:08.005760848Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T17:53:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I0415 17:54:02.338240    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/nodes/functional-662500
	I0415 17:54:02.338240    1700 round_trippers.go:469] Request Headers:
	I0415 17:54:02.338240    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:54:02.338240    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:54:02.344445    1700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0415 17:54:02.344445    1700 round_trippers.go:577] Response Headers:
	I0415 17:54:02.344445    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:54:02.344445    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:54:02 GMT
	I0415 17:54:02.344445    1700 round_trippers.go:580]     Audit-Id: 16a6285f-cec2-451d-bebe-f6d5c82bb920
	I0415 17:54:02.344445    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:54:02.344445    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:54:02.344445    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:54:02.344445    1700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","resourceVersion":"402","creationTimestamp":"2024-04-15T17:53:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-662500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-662500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T17_53_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T17:53:03Z","fieldsType":"Fie [truncated 4862 chars]
	I0415 17:54:02.819199    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-662500
	I0415 17:54:02.819281    1700 round_trippers.go:469] Request Headers:
	I0415 17:54:02.819281    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:54:02.819281    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:54:02.824968    1700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 17:54:02.824995    1700 round_trippers.go:577] Response Headers:
	I0415 17:54:02.824995    1700 round_trippers.go:580]     Audit-Id: 81acd0f3-1963-4004-a0b3-cc4ba6fb8b91
	I0415 17:54:02.824995    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:54:02.824995    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:54:02.824995    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:54:02.824995    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:54:02.825063    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:54:02 GMT
	I0415 17:54:02.825411    1700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-662500","namespace":"kube-system","uid":"452b972b-481c-47cc-a4ff-5017115e59b8","resourceVersion":"415","creationTimestamp":"2024-04-15T17:53:08Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"cf22128e1f7ff41b8f2626e32eecb6cb","kubernetes.io/config.mirror":"cf22128e1f7ff41b8f2626e32eecb6cb","kubernetes.io/config.seen":"2024-04-15T17:53:08.005760848Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T17:53:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I0415 17:54:02.826092    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/nodes/functional-662500
	I0415 17:54:02.826092    1700 round_trippers.go:469] Request Headers:
	I0415 17:54:02.826092    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:54:02.826092    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:54:02.831873    1700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 17:54:02.831873    1700 round_trippers.go:577] Response Headers:
	I0415 17:54:02.831873    1700 round_trippers.go:580]     Audit-Id: f6f33636-a5da-49c7-8249-a7e3fb77e0a9
	I0415 17:54:02.831873    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:54:02.831873    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:54:02.831873    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:54:02.831873    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:54:02.831873    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:54:02 GMT
	I0415 17:54:02.831873    1700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","resourceVersion":"402","creationTimestamp":"2024-04-15T17:53:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-662500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-662500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T17_53_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T17:53:03Z","fieldsType":"Fie [truncated 4862 chars]
	I0415 17:54:03.318775    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-662500
	I0415 17:54:03.318775    1700 round_trippers.go:469] Request Headers:
	I0415 17:54:03.318775    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:54:03.318775    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:54:03.327023    1700 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0415 17:54:03.327023    1700 round_trippers.go:577] Response Headers:
	I0415 17:54:03.327023    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:54:03 GMT
	I0415 17:54:03.327023    1700 round_trippers.go:580]     Audit-Id: 054c446a-89ac-4ad0-ae9d-2f83a4fa1841
	I0415 17:54:03.327023    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:54:03.327023    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:54:03.327023    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:54:03.327023    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:54:03.327023    1700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-662500","namespace":"kube-system","uid":"452b972b-481c-47cc-a4ff-5017115e59b8","resourceVersion":"415","creationTimestamp":"2024-04-15T17:53:08Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"cf22128e1f7ff41b8f2626e32eecb6cb","kubernetes.io/config.mirror":"cf22128e1f7ff41b8f2626e32eecb6cb","kubernetes.io/config.seen":"2024-04-15T17:53:08.005760848Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T17:53:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I0415 17:54:03.328069    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/nodes/functional-662500
	I0415 17:54:03.328607    1700 round_trippers.go:469] Request Headers:
	I0415 17:54:03.328607    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:54:03.328607    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:54:03.337144    1700 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0415 17:54:03.337144    1700 round_trippers.go:577] Response Headers:
	I0415 17:54:03.337144    1700 round_trippers.go:580]     Audit-Id: 8625e73f-bb1f-49e8-a67d-a0da46d99d7d
	I0415 17:54:03.337144    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:54:03.337144    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:54:03.337144    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:54:03.337144    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:54:03.337144    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:54:03 GMT
	I0415 17:54:03.337144    1700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","resourceVersion":"402","creationTimestamp":"2024-04-15T17:53:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-662500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-662500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T17_53_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T17:53:03Z","fieldsType":"Fie [truncated 4862 chars]
	I0415 17:54:03.820952    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-662500
	I0415 17:54:03.820952    1700 round_trippers.go:469] Request Headers:
	I0415 17:54:03.820952    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:54:03.820952    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:54:03.826204    1700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 17:54:03.826204    1700 round_trippers.go:577] Response Headers:
	I0415 17:54:03.826204    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:54:03 GMT
	I0415 17:54:03.826204    1700 round_trippers.go:580]     Audit-Id: 187747fe-fdeb-45e1-b9b5-4b1771400ce9
	I0415 17:54:03.826204    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:54:03.826204    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:54:03.826204    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:54:03.826204    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:54:03.826741    1700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-662500","namespace":"kube-system","uid":"452b972b-481c-47cc-a4ff-5017115e59b8","resourceVersion":"482","creationTimestamp":"2024-04-15T17:53:08Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"cf22128e1f7ff41b8f2626e32eecb6cb","kubernetes.io/config.mirror":"cf22128e1f7ff41b8f2626e32eecb6cb","kubernetes.io/config.seen":"2024-04-15T17:53:08.005760848Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T17:53:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8742 chars]
	I0415 17:54:03.827414    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/nodes/functional-662500
	I0415 17:54:03.827498    1700 round_trippers.go:469] Request Headers:
	I0415 17:54:03.827498    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:54:03.827498    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:54:03.833751    1700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0415 17:54:03.833751    1700 round_trippers.go:577] Response Headers:
	I0415 17:54:03.833751    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:54:03 GMT
	I0415 17:54:03.833751    1700 round_trippers.go:580]     Audit-Id: 01f889b9-1aaa-4d7e-b9ca-56714be74219
	I0415 17:54:03.833751    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:54:03.833751    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:54:03.833751    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:54:03.833751    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:54:03.834474    1700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","resourceVersion":"402","creationTimestamp":"2024-04-15T17:53:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-662500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-662500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T17_53_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T17:53:03Z","fieldsType":"Fie [truncated 4862 chars]
	I0415 17:54:03.834474    1700 pod_ready.go:92] pod "kube-apiserver-functional-662500" in "kube-system" namespace has status "Ready":"True"
	I0415 17:54:03.834474    1700 pod_ready.go:81] duration metric: took 6.5195302s for pod "kube-apiserver-functional-662500" in "kube-system" namespace to be "Ready" ...
	I0415 17:54:03.834474    1700 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-662500" in "kube-system" namespace to be "Ready" ...
	I0415 17:54:03.834474    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-662500
	I0415 17:54:03.835018    1700 round_trippers.go:469] Request Headers:
	I0415 17:54:03.835018    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:54:03.835018    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:54:03.839854    1700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 17:54:03.840682    1700 round_trippers.go:577] Response Headers:
	I0415 17:54:03.840682    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:54:03 GMT
	I0415 17:54:03.840682    1700 round_trippers.go:580]     Audit-Id: 5690200d-bebd-4600-b234-8e0747aa27c1
	I0415 17:54:03.840682    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:54:03.840682    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:54:03.840682    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:54:03.840682    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:54:03.840682    1700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-662500","namespace":"kube-system","uid":"18238d9b-10a8-43bb-9f6b-4a0c27e6b107","resourceVersion":"477","creationTimestamp":"2024-04-15T17:53:08Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f213e110514642074fa794224f2ddd8f","kubernetes.io/config.mirror":"f213e110514642074fa794224f2ddd8f","kubernetes.io/config.seen":"2024-04-15T17:53:08.005763348Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T17:53:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 8315 chars]
	I0415 17:54:03.842036    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/nodes/functional-662500
	I0415 17:54:03.842081    1700 round_trippers.go:469] Request Headers:
	I0415 17:54:03.842081    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:54:03.842081    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:54:03.846040    1700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 17:54:03.846862    1700 round_trippers.go:577] Response Headers:
	I0415 17:54:03.846862    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:54:03.846862    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:54:03.846862    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:54:03.846933    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:54:03 GMT
	I0415 17:54:03.846964    1700 round_trippers.go:580]     Audit-Id: 66ca2848-1fae-4cf3-8fd9-353b96bf8593
	I0415 17:54:03.846964    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:54:03.846964    1700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","resourceVersion":"402","creationTimestamp":"2024-04-15T17:53:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-662500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-662500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T17_53_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T17:53:03Z","fieldsType":"Fie [truncated 4862 chars]
	I0415 17:54:03.847629    1700 pod_ready.go:92] pod "kube-controller-manager-functional-662500" in "kube-system" namespace has status "Ready":"True"
	I0415 17:54:03.847629    1700 pod_ready.go:81] duration metric: took 13.1542ms for pod "kube-controller-manager-functional-662500" in "kube-system" namespace to be "Ready" ...
	I0415 17:54:03.847629    1700 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ppj2k" in "kube-system" namespace to be "Ready" ...
	I0415 17:54:03.847629    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/namespaces/kube-system/pods/kube-proxy-ppj2k
	I0415 17:54:03.847629    1700 round_trippers.go:469] Request Headers:
	I0415 17:54:03.847629    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:54:03.847629    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:54:03.852322    1700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 17:54:03.852366    1700 round_trippers.go:577] Response Headers:
	I0415 17:54:03.852366    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:54:03.852366    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:54:03 GMT
	I0415 17:54:03.852366    1700 round_trippers.go:580]     Audit-Id: 2ea4d3a4-fca8-4af8-89a1-78e86230f28e
	I0415 17:54:03.852366    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:54:03.852464    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:54:03.852464    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:54:03.852684    1700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ppj2k","generateName":"kube-proxy-","namespace":"kube-system","uid":"8589b092-fe36-43c4-8c49-c3a4031b4e30","resourceVersion":"417","creationTimestamp":"2024-04-15T17:53:21Z","labels":{"controller-revision-hash":"7659797656","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"54e4a0e4-abd5-46db-b3c6-dd49021c3b8e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-15T17:53:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"54e4a0e4-abd5-46db-b3c6-dd49021c3b8e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6030 chars]
	I0415 17:54:03.852736    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/nodes/functional-662500
	I0415 17:54:03.852736    1700 round_trippers.go:469] Request Headers:
	I0415 17:54:03.852736    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:54:03.852736    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:54:03.859492    1700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0415 17:54:03.859492    1700 round_trippers.go:577] Response Headers:
	I0415 17:54:03.859492    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:54:03.859492    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:54:03.859492    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:54:03 GMT
	I0415 17:54:03.859492    1700 round_trippers.go:580]     Audit-Id: e87c3f2b-292b-4423-a541-d3865f71847b
	I0415 17:54:03.859492    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:54:03.859492    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:54:03.860440    1700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","resourceVersion":"402","creationTimestamp":"2024-04-15T17:53:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-662500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-662500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T17_53_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T17:53:03Z","fieldsType":"Fie [truncated 4862 chars]
	I0415 17:54:03.860440    1700 pod_ready.go:92] pod "kube-proxy-ppj2k" in "kube-system" namespace has status "Ready":"True"
	I0415 17:54:03.860440    1700 pod_ready.go:81] duration metric: took 12.8101ms for pod "kube-proxy-ppj2k" in "kube-system" namespace to be "Ready" ...
	I0415 17:54:03.860440    1700 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-662500" in "kube-system" namespace to be "Ready" ...
	I0415 17:54:03.860440    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-662500
	I0415 17:54:03.860440    1700 round_trippers.go:469] Request Headers:
	I0415 17:54:03.860440    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:54:03.860440    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:54:03.865435    1700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 17:54:03.865435    1700 round_trippers.go:577] Response Headers:
	I0415 17:54:03.865794    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:54:03.865794    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:54:03.865794    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:54:03 GMT
	I0415 17:54:03.865794    1700 round_trippers.go:580]     Audit-Id: a1ba59e4-897a-4a4b-8b2b-8c1f8a381c94
	I0415 17:54:03.865794    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:54:03.865794    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:54:03.865794    1700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-662500","namespace":"kube-system","uid":"7d236e19-c3b6-4344-a5d9-2b84c530e5a9","resourceVersion":"416","creationTimestamp":"2024-04-15T17:53:08Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9d3337215b06a22725acdc065643e199","kubernetes.io/config.mirror":"9d3337215b06a22725acdc065643e199","kubernetes.io/config.seen":"2024-04-15T17:53:08.005765149Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T17:53:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0415 17:54:03.866374    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/nodes/functional-662500
	I0415 17:54:03.866464    1700 round_trippers.go:469] Request Headers:
	I0415 17:54:03.866464    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:54:03.866464    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:54:03.870703    1700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 17:54:03.870703    1700 round_trippers.go:577] Response Headers:
	I0415 17:54:03.870703    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:54:03.870703    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:54:03.870703    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:54:03.870703    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:54:03 GMT
	I0415 17:54:03.870703    1700 round_trippers.go:580]     Audit-Id: 8bdfab2d-7516-417b-9d2c-f9c18cfb1d35
	I0415 17:54:03.870703    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:54:03.871791    1700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","resourceVersion":"402","creationTimestamp":"2024-04-15T17:53:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-662500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-662500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T17_53_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T17:53:03Z","fieldsType":"Fie [truncated 4862 chars]
	I0415 17:54:04.365877    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-662500
	I0415 17:54:04.365877    1700 round_trippers.go:469] Request Headers:
	I0415 17:54:04.365877    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:54:04.365877    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:54:04.371403    1700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 17:54:04.371403    1700 round_trippers.go:577] Response Headers:
	I0415 17:54:04.371403    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:54:04 GMT
	I0415 17:54:04.371403    1700 round_trippers.go:580]     Audit-Id: e4dfe1bc-0919-49ca-a78c-5b37c1dbac25
	I0415 17:54:04.371403    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:54:04.371403    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:54:04.371403    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:54:04.371403    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:54:04.372103    1700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-662500","namespace":"kube-system","uid":"7d236e19-c3b6-4344-a5d9-2b84c530e5a9","resourceVersion":"416","creationTimestamp":"2024-04-15T17:53:08Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9d3337215b06a22725acdc065643e199","kubernetes.io/config.mirror":"9d3337215b06a22725acdc065643e199","kubernetes.io/config.seen":"2024-04-15T17:53:08.005765149Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T17:53:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0415 17:54:04.372636    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/nodes/functional-662500
	I0415 17:54:04.372636    1700 round_trippers.go:469] Request Headers:
	I0415 17:54:04.372636    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:54:04.372636    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:54:04.379311    1700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0415 17:54:04.379311    1700 round_trippers.go:577] Response Headers:
	I0415 17:54:04.379311    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:54:04.379311    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:54:04.379311    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:54:04.379311    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:54:04 GMT
	I0415 17:54:04.379311    1700 round_trippers.go:580]     Audit-Id: 0928f578-b6b6-489d-b5e1-262e492e1899
	I0415 17:54:04.379311    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:54:04.379311    1700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","resourceVersion":"402","creationTimestamp":"2024-04-15T17:53:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-662500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-662500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T17_53_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T17:53:03Z","fieldsType":"Fie [truncated 4862 chars]
	I0415 17:54:04.864038    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-662500
	I0415 17:54:04.864271    1700 round_trippers.go:469] Request Headers:
	I0415 17:54:04.864271    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:54:04.864271    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:54:04.870344    1700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0415 17:54:04.870458    1700 round_trippers.go:577] Response Headers:
	I0415 17:54:04.870458    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:54:04.870496    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:54:04 GMT
	I0415 17:54:04.870496    1700 round_trippers.go:580]     Audit-Id: d85ee427-eb0e-4327-a462-666a08b1e947
	I0415 17:54:04.870496    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:54:04.870496    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:54:04.870496    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:54:04.870893    1700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-662500","namespace":"kube-system","uid":"7d236e19-c3b6-4344-a5d9-2b84c530e5a9","resourceVersion":"416","creationTimestamp":"2024-04-15T17:53:08Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9d3337215b06a22725acdc065643e199","kubernetes.io/config.mirror":"9d3337215b06a22725acdc065643e199","kubernetes.io/config.seen":"2024-04-15T17:53:08.005765149Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T17:53:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0415 17:54:04.871098    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/nodes/functional-662500
	I0415 17:54:04.871098    1700 round_trippers.go:469] Request Headers:
	I0415 17:54:04.871639    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:54:04.871639    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:54:04.878493    1700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0415 17:54:04.878493    1700 round_trippers.go:577] Response Headers:
	I0415 17:54:04.878493    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:54:04 GMT
	I0415 17:54:04.878557    1700 round_trippers.go:580]     Audit-Id: c104b0ba-2ab5-4730-be17-9b369cc512ad
	I0415 17:54:04.878557    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:54:04.878557    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:54:04.878557    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:54:04.878557    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:54:04.878729    1700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","resourceVersion":"402","creationTimestamp":"2024-04-15T17:53:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-662500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-662500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T17_53_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T17:53:03Z","fieldsType":"Fie [truncated 4862 chars]
	I0415 17:54:05.367333    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-662500
	I0415 17:54:05.367333    1700 round_trippers.go:469] Request Headers:
	I0415 17:54:05.367415    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:54:05.367415    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:54:05.373371    1700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 17:54:05.373371    1700 round_trippers.go:577] Response Headers:
	I0415 17:54:05.373371    1700 round_trippers.go:580]     Audit-Id: 8de55d46-0c4a-4d3b-bb62-9598328ccc04
	I0415 17:54:05.373371    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:54:05.373371    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:54:05.373371    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:54:05.373371    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:54:05.373371    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:54:05 GMT
	I0415 17:54:05.373925    1700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-662500","namespace":"kube-system","uid":"7d236e19-c3b6-4344-a5d9-2b84c530e5a9","resourceVersion":"416","creationTimestamp":"2024-04-15T17:53:08Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9d3337215b06a22725acdc065643e199","kubernetes.io/config.mirror":"9d3337215b06a22725acdc065643e199","kubernetes.io/config.seen":"2024-04-15T17:53:08.005765149Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T17:53:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0415 17:54:05.374917    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/nodes/functional-662500
	I0415 17:54:05.374917    1700 round_trippers.go:469] Request Headers:
	I0415 17:54:05.375042    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:54:05.375042    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:54:05.381364    1700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0415 17:54:05.381364    1700 round_trippers.go:577] Response Headers:
	I0415 17:54:05.381364    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:54:05.381364    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:54:05.381364    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:54:05.381364    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:54:05.381364    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:54:05 GMT
	I0415 17:54:05.381364    1700 round_trippers.go:580]     Audit-Id: 70e008c1-46ec-4b00-95aa-99ef3ba9f125
	I0415 17:54:05.382079    1700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","resourceVersion":"402","creationTimestamp":"2024-04-15T17:53:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-662500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-662500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T17_53_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T17:53:03Z","fieldsType":"Fie [truncated 4862 chars]
	I0415 17:54:05.866368    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-662500
	I0415 17:54:05.866368    1700 round_trippers.go:469] Request Headers:
	I0415 17:54:05.866368    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:54:05.866368    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:54:05.873907    1700 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0415 17:54:05.873907    1700 round_trippers.go:577] Response Headers:
	I0415 17:54:05.873907    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:54:05.873907    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:54:05 GMT
	I0415 17:54:05.873907    1700 round_trippers.go:580]     Audit-Id: 17c96d39-2d49-4ec1-bef7-afd701be0163
	I0415 17:54:05.873907    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:54:05.873907    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:54:05.873907    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:54:05.874509    1700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-662500","namespace":"kube-system","uid":"7d236e19-c3b6-4344-a5d9-2b84c530e5a9","resourceVersion":"416","creationTimestamp":"2024-04-15T17:53:08Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9d3337215b06a22725acdc065643e199","kubernetes.io/config.mirror":"9d3337215b06a22725acdc065643e199","kubernetes.io/config.seen":"2024-04-15T17:53:08.005765149Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T17:53:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0415 17:54:05.875131    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/nodes/functional-662500
	I0415 17:54:05.875131    1700 round_trippers.go:469] Request Headers:
	I0415 17:54:05.875226    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:54:05.875226    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:54:05.881398    1700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0415 17:54:05.881398    1700 round_trippers.go:577] Response Headers:
	I0415 17:54:05.881398    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:54:05.881398    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:54:05.881398    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:54:05 GMT
	I0415 17:54:05.881398    1700 round_trippers.go:580]     Audit-Id: aa248b95-8eef-4b04-ac34-4ce60d72d880
	I0415 17:54:05.881398    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:54:05.881398    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:54:05.882041    1700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","resourceVersion":"402","creationTimestamp":"2024-04-15T17:53:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-662500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-662500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T17_53_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T17:53:03Z","fieldsType":"Fie [truncated 4862 chars]
	I0415 17:54:05.882802    1700 pod_ready.go:102] pod "kube-scheduler-functional-662500" in "kube-system" namespace has status "Ready":"False"
	I0415 17:54:06.363958    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-662500
	I0415 17:54:06.364151    1700 round_trippers.go:469] Request Headers:
	I0415 17:54:06.364151    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:54:06.364151    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:54:06.370037    1700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 17:54:06.370037    1700 round_trippers.go:577] Response Headers:
	I0415 17:54:06.370037    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:54:06.370037    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:54:06.370037    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:54:06.370037    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:54:06.370037    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:54:06 GMT
	I0415 17:54:06.370037    1700 round_trippers.go:580]     Audit-Id: 3a0232e6-b919-4ec3-a354-960abbef005e
	I0415 17:54:06.370804    1700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-662500","namespace":"kube-system","uid":"7d236e19-c3b6-4344-a5d9-2b84c530e5a9","resourceVersion":"416","creationTimestamp":"2024-04-15T17:53:08Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9d3337215b06a22725acdc065643e199","kubernetes.io/config.mirror":"9d3337215b06a22725acdc065643e199","kubernetes.io/config.seen":"2024-04-15T17:53:08.005765149Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T17:53:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0415 17:54:06.371423    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/nodes/functional-662500
	I0415 17:54:06.371423    1700 round_trippers.go:469] Request Headers:
	I0415 17:54:06.371423    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:54:06.371423    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:54:06.377436    1700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0415 17:54:06.377436    1700 round_trippers.go:577] Response Headers:
	I0415 17:54:06.377436    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:54:06.377436    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:54:06 GMT
	I0415 17:54:06.377436    1700 round_trippers.go:580]     Audit-Id: 9a244546-579a-4808-b535-9d956463a701
	I0415 17:54:06.377436    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:54:06.377436    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:54:06.377436    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:54:06.379132    1700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","resourceVersion":"402","creationTimestamp":"2024-04-15T17:53:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-662500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-662500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T17_53_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T17:53:03Z","fieldsType":"Fie [truncated 4862 chars]
	I0415 17:54:06.863365    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-662500
	I0415 17:54:06.863365    1700 round_trippers.go:469] Request Headers:
	I0415 17:54:06.863448    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:54:06.863448    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:54:06.868890    1700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 17:54:06.868890    1700 round_trippers.go:577] Response Headers:
	I0415 17:54:06.868890    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:54:06.868890    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:54:06 GMT
	I0415 17:54:06.868890    1700 round_trippers.go:580]     Audit-Id: 06ffb3dd-e838-43d9-b6a5-9dcadce9110a
	I0415 17:54:06.868998    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:54:06.868998    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:54:06.868998    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:54:06.869265    1700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-662500","namespace":"kube-system","uid":"7d236e19-c3b6-4344-a5d9-2b84c530e5a9","resourceVersion":"416","creationTimestamp":"2024-04-15T17:53:08Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9d3337215b06a22725acdc065643e199","kubernetes.io/config.mirror":"9d3337215b06a22725acdc065643e199","kubernetes.io/config.seen":"2024-04-15T17:53:08.005765149Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T17:53:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0415 17:54:06.869942    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/nodes/functional-662500
	I0415 17:54:06.870067    1700 round_trippers.go:469] Request Headers:
	I0415 17:54:06.870067    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:54:06.870067    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:54:06.875415    1700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 17:54:06.875415    1700 round_trippers.go:577] Response Headers:
	I0415 17:54:06.875415    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:54:06 GMT
	I0415 17:54:06.875415    1700 round_trippers.go:580]     Audit-Id: 5fde37c7-aff6-415e-9dc3-e001d05b34bd
	I0415 17:54:06.875415    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:54:06.875415    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:54:06.875415    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:54:06.875415    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:54:06.875964    1700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","resourceVersion":"402","creationTimestamp":"2024-04-15T17:53:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-662500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-662500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T17_53_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T17:53:03Z","fieldsType":"Fie [truncated 4862 chars]
	I0415 17:54:07.375498    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-662500
	I0415 17:54:07.375498    1700 round_trippers.go:469] Request Headers:
	I0415 17:54:07.375498    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:54:07.375498    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:54:07.380890    1700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 17:54:07.380890    1700 round_trippers.go:577] Response Headers:
	I0415 17:54:07.380890    1700 round_trippers.go:580]     Audit-Id: 69039a8b-e7bb-4b27-b50c-ac1737354329
	I0415 17:54:07.380890    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:54:07.380890    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:54:07.380890    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:54:07.380890    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:54:07.380890    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:54:07 GMT
	I0415 17:54:07.381559    1700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-662500","namespace":"kube-system","uid":"7d236e19-c3b6-4344-a5d9-2b84c530e5a9","resourceVersion":"416","creationTimestamp":"2024-04-15T17:53:08Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9d3337215b06a22725acdc065643e199","kubernetes.io/config.mirror":"9d3337215b06a22725acdc065643e199","kubernetes.io/config.seen":"2024-04-15T17:53:08.005765149Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T17:53:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0415 17:54:07.381781    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/nodes/functional-662500
	I0415 17:54:07.381781    1700 round_trippers.go:469] Request Headers:
	I0415 17:54:07.381781    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:54:07.381781    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:54:07.395247    1700 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0415 17:54:07.395247    1700 round_trippers.go:577] Response Headers:
	I0415 17:54:07.395247    1700 round_trippers.go:580]     Audit-Id: 6f24bf72-10ac-4f18-a753-fea51ad3e527
	I0415 17:54:07.395247    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:54:07.395247    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:54:07.395247    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:54:07.395247    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:54:07.395247    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:54:07 GMT
	I0415 17:54:07.395984    1700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","resourceVersion":"402","creationTimestamp":"2024-04-15T17:53:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-662500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-662500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T17_53_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T17:53:03Z","fieldsType":"Fie [truncated 4862 chars]
	I0415 17:54:07.875706    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-662500
	I0415 17:54:07.875706    1700 round_trippers.go:469] Request Headers:
	I0415 17:54:07.875706    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:54:07.875706    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:54:07.882635    1700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0415 17:54:07.882681    1700 round_trippers.go:577] Response Headers:
	I0415 17:54:07.882681    1700 round_trippers.go:580]     Audit-Id: 44f984f6-152b-4919-a8bd-c2b32ea0df27
	I0415 17:54:07.882681    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:54:07.882681    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:54:07.882681    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:54:07.882681    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:54:07.882681    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:54:07 GMT
	I0415 17:54:07.882681    1700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-662500","namespace":"kube-system","uid":"7d236e19-c3b6-4344-a5d9-2b84c530e5a9","resourceVersion":"416","creationTimestamp":"2024-04-15T17:53:08Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9d3337215b06a22725acdc065643e199","kubernetes.io/config.mirror":"9d3337215b06a22725acdc065643e199","kubernetes.io/config.seen":"2024-04-15T17:53:08.005765149Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T17:53:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0415 17:54:07.883367    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/nodes/functional-662500
	I0415 17:54:07.883448    1700 round_trippers.go:469] Request Headers:
	I0415 17:54:07.883448    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:54:07.883448    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:54:07.890354    1700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0415 17:54:07.890383    1700 round_trippers.go:577] Response Headers:
	I0415 17:54:07.890383    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:54:07.890383    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:54:07 GMT
	I0415 17:54:07.890383    1700 round_trippers.go:580]     Audit-Id: 652255b4-3486-4644-b521-b098263529fd
	I0415 17:54:07.890383    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:54:07.890383    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:54:07.890383    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:54:07.890383    1700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","resourceVersion":"402","creationTimestamp":"2024-04-15T17:53:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-662500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-662500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T17_53_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T17:53:03Z","fieldsType":"Fie [truncated 4862 chars]
	I0415 17:54:07.891215    1700 pod_ready.go:102] pod "kube-scheduler-functional-662500" in "kube-system" namespace has status "Ready":"False"
	I0415 17:54:08.361829    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-662500
	I0415 17:54:08.362057    1700 round_trippers.go:469] Request Headers:
	I0415 17:54:08.362057    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:54:08.362057    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:54:08.368292    1700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0415 17:54:08.368404    1700 round_trippers.go:577] Response Headers:
	I0415 17:54:08.368404    1700 round_trippers.go:580]     Audit-Id: 658bd4eb-b221-4356-a052-ccc01dfc51f5
	I0415 17:54:08.368404    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:54:08.368404    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:54:08.368404    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:54:08.368404    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:54:08.368506    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:54:08 GMT
	I0415 17:54:08.368863    1700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-662500","namespace":"kube-system","uid":"7d236e19-c3b6-4344-a5d9-2b84c530e5a9","resourceVersion":"416","creationTimestamp":"2024-04-15T17:53:08Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9d3337215b06a22725acdc065643e199","kubernetes.io/config.mirror":"9d3337215b06a22725acdc065643e199","kubernetes.io/config.seen":"2024-04-15T17:53:08.005765149Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T17:53:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0415 17:54:08.369490    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/nodes/functional-662500
	I0415 17:54:08.369490    1700 round_trippers.go:469] Request Headers:
	I0415 17:54:08.369579    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:54:08.369579    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:54:08.376641    1700 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0415 17:54:08.376734    1700 round_trippers.go:577] Response Headers:
	I0415 17:54:08.376734    1700 round_trippers.go:580]     Audit-Id: 8eecc85c-035d-4739-aa65-5ad93e741d11
	I0415 17:54:08.376734    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:54:08.376734    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:54:08.376734    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:54:08.376734    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:54:08.376734    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:54:08 GMT
	I0415 17:54:08.376916    1700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","resourceVersion":"402","creationTimestamp":"2024-04-15T17:53:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-662500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-662500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T17_53_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T17:53:03Z","fieldsType":"Fie [truncated 4862 chars]
	I0415 17:54:08.861822    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-662500
	I0415 17:54:08.861904    1700 round_trippers.go:469] Request Headers:
	I0415 17:54:08.861904    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:54:08.861904    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:54:08.867965    1700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0415 17:54:08.867965    1700 round_trippers.go:577] Response Headers:
	I0415 17:54:08.867965    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:54:08 GMT
	I0415 17:54:08.867965    1700 round_trippers.go:580]     Audit-Id: ce34aef7-847a-47b6-98cb-3df68a4f145f
	I0415 17:54:08.868073    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:54:08.868073    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:54:08.868073    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:54:08.868073    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:54:08.868303    1700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-662500","namespace":"kube-system","uid":"7d236e19-c3b6-4344-a5d9-2b84c530e5a9","resourceVersion":"416","creationTimestamp":"2024-04-15T17:53:08Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9d3337215b06a22725acdc065643e199","kubernetes.io/config.mirror":"9d3337215b06a22725acdc065643e199","kubernetes.io/config.seen":"2024-04-15T17:53:08.005765149Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T17:53:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0415 17:54:08.868842    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/nodes/functional-662500
	I0415 17:54:08.868957    1700 round_trippers.go:469] Request Headers:
	I0415 17:54:08.868957    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:54:08.868957    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:54:08.874645    1700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 17:54:08.874645    1700 round_trippers.go:577] Response Headers:
	I0415 17:54:08.874645    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:54:08 GMT
	I0415 17:54:08.874645    1700 round_trippers.go:580]     Audit-Id: 6149f46d-00b6-4bf4-8ed5-ba0096b178da
	I0415 17:54:08.874645    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:54:08.874645    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:54:08.875198    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:54:08.875198    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:54:08.875408    1700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","resourceVersion":"402","creationTimestamp":"2024-04-15T17:53:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-662500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-662500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T17_53_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T17:53:03Z","fieldsType":"Fie [truncated 4862 chars]
	I0415 17:54:09.364027    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-662500
	I0415 17:54:09.364027    1700 round_trippers.go:469] Request Headers:
	I0415 17:54:09.364132    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:54:09.364132    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:54:09.373508    1700 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0415 17:54:09.373508    1700 round_trippers.go:577] Response Headers:
	I0415 17:54:09.373508    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:54:09.373508    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:54:09.374141    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:54:09.374141    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:54:09 GMT
	I0415 17:54:09.374141    1700 round_trippers.go:580]     Audit-Id: 1d563929-0148-4072-a068-fb533adb725f
	I0415 17:54:09.374141    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:54:09.374141    1700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-662500","namespace":"kube-system","uid":"7d236e19-c3b6-4344-a5d9-2b84c530e5a9","resourceVersion":"416","creationTimestamp":"2024-04-15T17:53:08Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9d3337215b06a22725acdc065643e199","kubernetes.io/config.mirror":"9d3337215b06a22725acdc065643e199","kubernetes.io/config.seen":"2024-04-15T17:53:08.005765149Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T17:53:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0415 17:54:09.374800    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/nodes/functional-662500
	I0415 17:54:09.374800    1700 round_trippers.go:469] Request Headers:
	I0415 17:54:09.374800    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:54:09.374800    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:54:09.392717    1700 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0415 17:54:09.392717    1700 round_trippers.go:577] Response Headers:
	I0415 17:54:09.392717    1700 round_trippers.go:580]     Audit-Id: ac732c07-bab6-47ae-8fbb-8f05001e1c23
	I0415 17:54:09.392717    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:54:09.392717    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:54:09.392717    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:54:09.392717    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:54:09.392717    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:54:09 GMT
	I0415 17:54:09.392717    1700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","resourceVersion":"402","creationTimestamp":"2024-04-15T17:53:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-662500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-662500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T17_53_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T17:53:03Z","fieldsType":"Fie [truncated 4862 chars]
	I0415 17:54:09.863304    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-662500
	I0415 17:54:09.863377    1700 round_trippers.go:469] Request Headers:
	I0415 17:54:09.863377    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:54:09.863377    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:54:09.868246    1700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 17:54:09.868246    1700 round_trippers.go:577] Response Headers:
	I0415 17:54:09.868246    1700 round_trippers.go:580]     Audit-Id: a83d7947-d0fa-4f06-b62e-0cfe2cccdaaa
	I0415 17:54:09.868246    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:54:09.868246    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:54:09.868246    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:54:09.868246    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:54:09.868246    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:54:09 GMT
	I0415 17:54:09.868788    1700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-662500","namespace":"kube-system","uid":"7d236e19-c3b6-4344-a5d9-2b84c530e5a9","resourceVersion":"416","creationTimestamp":"2024-04-15T17:53:08Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9d3337215b06a22725acdc065643e199","kubernetes.io/config.mirror":"9d3337215b06a22725acdc065643e199","kubernetes.io/config.seen":"2024-04-15T17:53:08.005765149Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T17:53:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0415 17:54:09.869390    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/nodes/functional-662500
	I0415 17:54:09.869390    1700 round_trippers.go:469] Request Headers:
	I0415 17:54:09.869390    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:54:09.869390    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:54:09.874864    1700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 17:54:09.874864    1700 round_trippers.go:577] Response Headers:
	I0415 17:54:09.874864    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:54:09 GMT
	I0415 17:54:09.874864    1700 round_trippers.go:580]     Audit-Id: 688f9804-0d30-4cab-b735-d830af51bc46
	I0415 17:54:09.874864    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:54:09.874864    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:54:09.875409    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:54:09.875409    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:54:09.875765    1700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","resourceVersion":"402","creationTimestamp":"2024-04-15T17:53:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-662500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-662500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T17_53_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T17:53:03Z","fieldsType":"Fie [truncated 4862 chars]
	I0415 17:54:10.362673    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-662500
	I0415 17:54:10.362673    1700 round_trippers.go:469] Request Headers:
	I0415 17:54:10.362673    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:54:10.362673    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:54:10.371335    1700 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0415 17:54:10.371335    1700 round_trippers.go:577] Response Headers:
	I0415 17:54:10.371442    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:54:10.371442    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:54:10.371482    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:54:10.371573    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:54:10 GMT
	I0415 17:54:10.371688    1700 round_trippers.go:580]     Audit-Id: c18870df-57e0-496a-8103-b9c6454e6a34
	I0415 17:54:10.371688    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:54:10.372293    1700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-662500","namespace":"kube-system","uid":"7d236e19-c3b6-4344-a5d9-2b84c530e5a9","resourceVersion":"493","creationTimestamp":"2024-04-15T17:53:08Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9d3337215b06a22725acdc065643e199","kubernetes.io/config.mirror":"9d3337215b06a22725acdc065643e199","kubernetes.io/config.seen":"2024-04-15T17:53:08.005765149Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-15T17:53:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5197 chars]
	I0415 17:54:10.372851    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/nodes/functional-662500
	I0415 17:54:10.372851    1700 round_trippers.go:469] Request Headers:
	I0415 17:54:10.372851    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:54:10.372851    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:54:10.383324    1700 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0415 17:54:10.383324    1700 round_trippers.go:577] Response Headers:
	I0415 17:54:10.383324    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:54:10.383324    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:54:10 GMT
	I0415 17:54:10.383324    1700 round_trippers.go:580]     Audit-Id: d63e7a29-f70d-4f48-b6df-57a6c010d4c1
	I0415 17:54:10.383324    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:54:10.383324    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:54:10.383324    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:54:10.383324    1700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","resourceVersion":"402","creationTimestamp":"2024-04-15T17:53:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-662500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-662500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T17_53_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-04-15T17:53:03Z","fieldsType":"Fie [truncated 4862 chars]
	I0415 17:54:10.384188    1700 pod_ready.go:92] pod "kube-scheduler-functional-662500" in "kube-system" namespace has status "Ready":"True"
	I0415 17:54:10.384188    1700 pod_ready.go:81] duration metric: took 6.5234405s for pod "kube-scheduler-functional-662500" in "kube-system" namespace to be "Ready" ...
	I0415 17:54:10.384188    1700 pod_ready.go:38] duration metric: took 13.3844322s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0415 17:54:10.384298    1700 api_server.go:52] waiting for apiserver process to appear ...
	I0415 17:54:10.395795    1700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0415 17:54:10.420329    1700 command_runner.go:130] > 5788
	I0415 17:54:10.420329    1700 api_server.go:72] duration metric: took 15.7092045s to wait for apiserver process to appear ...
	I0415 17:54:10.421142    1700 api_server.go:88] waiting for apiserver healthz status ...
	I0415 17:54:10.421142    1700 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51310/healthz ...
	I0415 17:54:10.434494    1700 api_server.go:279] https://127.0.0.1:51310/healthz returned 200:
	ok
	I0415 17:54:10.434494    1700 round_trippers.go:463] GET https://127.0.0.1:51310/version
	I0415 17:54:10.434494    1700 round_trippers.go:469] Request Headers:
	I0415 17:54:10.434494    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:54:10.434494    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:54:10.438001    1700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 17:54:10.438001    1700 round_trippers.go:577] Response Headers:
	I0415 17:54:10.438001    1700 round_trippers.go:580]     Content-Length: 263
	I0415 17:54:10.438001    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:54:10 GMT
	I0415 17:54:10.438001    1700 round_trippers.go:580]     Audit-Id: f197c521-7003-439b-b56b-b650ba00d12e
	I0415 17:54:10.438001    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:54:10.438001    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:54:10.438001    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:54:10.438001    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:54:10.438001    1700 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "29",
	  "gitVersion": "v1.29.3",
	  "gitCommit": "6813625b7cd706db5bc7388921be03071e1a492d",
	  "gitTreeState": "clean",
	  "buildDate": "2024-03-14T23:58:36Z",
	  "goVersion": "go1.21.8",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0415 17:54:10.438595    1700 api_server.go:141] control plane version: v1.29.3
	I0415 17:54:10.438595    1700 api_server.go:131] duration metric: took 17.4519ms to wait for apiserver health ...
	I0415 17:54:10.438595    1700 system_pods.go:43] waiting for kube-system pods to appear ...
	I0415 17:54:10.438595    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/namespaces/kube-system/pods
	I0415 17:54:10.438595    1700 round_trippers.go:469] Request Headers:
	I0415 17:54:10.438595    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:54:10.438595    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:54:10.447132    1700 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0415 17:54:10.447132    1700 round_trippers.go:577] Response Headers:
	I0415 17:54:10.447132    1700 round_trippers.go:580]     Audit-Id: 15050ce9-1857-4801-8d96-3072b1c9e871
	I0415 17:54:10.447132    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:54:10.447132    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:54:10.447132    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:54:10.447132    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:54:10.447547    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:54:10 GMT
	I0415 17:54:10.448446    1700 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"493"},"items":[{"metadata":{"name":"coredns-76f75df574-4mhcz","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"e691e826-53b3-4913-8044-49462113527f","resourceVersion":"480","creationTimestamp":"2024-04-15T17:53:21Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"a75528b0-b3dd-42a1-99a4-9919acdc57a1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-15T17:53:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a75528b0-b3dd-42a1-99a4-9919acdc57a1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 52145 chars]
	I0415 17:54:10.450629    1700 system_pods.go:59] 7 kube-system pods found
	I0415 17:54:10.450629    1700 system_pods.go:61] "coredns-76f75df574-4mhcz" [e691e826-53b3-4913-8044-49462113527f] Running
	I0415 17:54:10.450629    1700 system_pods.go:61] "etcd-functional-662500" [cb6ac505-14b1-4680-987d-8374118db5e6] Running
	I0415 17:54:10.450629    1700 system_pods.go:61] "kube-apiserver-functional-662500" [452b972b-481c-47cc-a4ff-5017115e59b8] Running
	I0415 17:54:10.450629    1700 system_pods.go:61] "kube-controller-manager-functional-662500" [18238d9b-10a8-43bb-9f6b-4a0c27e6b107] Running
	I0415 17:54:10.450629    1700 system_pods.go:61] "kube-proxy-ppj2k" [8589b092-fe36-43c4-8c49-c3a4031b4e30] Running
	I0415 17:54:10.450629    1700 system_pods.go:61] "kube-scheduler-functional-662500" [7d236e19-c3b6-4344-a5d9-2b84c530e5a9] Running
	I0415 17:54:10.450629    1700 system_pods.go:61] "storage-provisioner" [c2731567-18b5-4c30-9bfe-257c96aa88e9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0415 17:54:10.450629    1700 system_pods.go:74] duration metric: took 12.0337ms to wait for pod list to return data ...
	I0415 17:54:10.450629    1700 default_sa.go:34] waiting for default service account to be created ...
	I0415 17:54:10.450629    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/namespaces/default/serviceaccounts
	I0415 17:54:10.450629    1700 round_trippers.go:469] Request Headers:
	I0415 17:54:10.450629    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:54:10.450629    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:54:10.459177    1700 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0415 17:54:10.459177    1700 round_trippers.go:577] Response Headers:
	I0415 17:54:10.459177    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:54:10.459177    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:54:10.459177    1700 round_trippers.go:580]     Content-Length: 261
	I0415 17:54:10.459177    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:54:10 GMT
	I0415 17:54:10.459177    1700 round_trippers.go:580]     Audit-Id: 98fbae8f-3123-4a67-9605-1003d119e639
	I0415 17:54:10.459177    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:54:10.459177    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:54:10.459177    1700 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"493"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"8916e5b4-7df2-4159-bb9e-f5acbf1f3440","resourceVersion":"303","creationTimestamp":"2024-04-15T17:53:20Z"}}]}
	I0415 17:54:10.459177    1700 default_sa.go:45] found service account: "default"
	I0415 17:54:10.459177    1700 default_sa.go:55] duration metric: took 8.5481ms for default service account to be created ...
	I0415 17:54:10.459177    1700 system_pods.go:116] waiting for k8s-apps to be running ...
	I0415 17:54:10.459177    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/namespaces/kube-system/pods
	I0415 17:54:10.460230    1700 round_trippers.go:469] Request Headers:
	I0415 17:54:10.460230    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:54:10.460288    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:54:10.465569    1700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 17:54:10.465900    1700 round_trippers.go:577] Response Headers:
	I0415 17:54:10.465900    1700 round_trippers.go:580]     Audit-Id: 4ebd2cbf-d526-4d48-a920-a79b3ebc89f0
	I0415 17:54:10.465900    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:54:10.465900    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:54:10.465900    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:54:10.465948    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:54:10.465948    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:54:10 GMT
	I0415 17:54:10.467941    1700 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"493"},"items":[{"metadata":{"name":"coredns-76f75df574-4mhcz","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"e691e826-53b3-4913-8044-49462113527f","resourceVersion":"480","creationTimestamp":"2024-04-15T17:53:21Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"a75528b0-b3dd-42a1-99a4-9919acdc57a1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-15T17:53:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a75528b0-b3dd-42a1-99a4-9919acdc57a1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 52145 chars]
	I0415 17:54:10.473871    1700 system_pods.go:86] 7 kube-system pods found
	I0415 17:54:10.473871    1700 system_pods.go:89] "coredns-76f75df574-4mhcz" [e691e826-53b3-4913-8044-49462113527f] Running
	I0415 17:54:10.473871    1700 system_pods.go:89] "etcd-functional-662500" [cb6ac505-14b1-4680-987d-8374118db5e6] Running
	I0415 17:54:10.473871    1700 system_pods.go:89] "kube-apiserver-functional-662500" [452b972b-481c-47cc-a4ff-5017115e59b8] Running
	I0415 17:54:10.473871    1700 system_pods.go:89] "kube-controller-manager-functional-662500" [18238d9b-10a8-43bb-9f6b-4a0c27e6b107] Running
	I0415 17:54:10.473871    1700 system_pods.go:89] "kube-proxy-ppj2k" [8589b092-fe36-43c4-8c49-c3a4031b4e30] Running
	I0415 17:54:10.473871    1700 system_pods.go:89] "kube-scheduler-functional-662500" [7d236e19-c3b6-4344-a5d9-2b84c530e5a9] Running
	I0415 17:54:10.473871    1700 system_pods.go:89] "storage-provisioner" [c2731567-18b5-4c30-9bfe-257c96aa88e9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0415 17:54:10.473871    1700 system_pods.go:126] duration metric: took 14.6925ms to wait for k8s-apps to be running ...
	I0415 17:54:10.473871    1700 system_svc.go:44] waiting for kubelet service to be running ....
	I0415 17:54:10.486071    1700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 17:54:10.508910    1700 system_svc.go:56] duration metric: took 35.0373ms WaitForService to wait for kubelet
	I0415 17:54:10.508962    1700 kubeadm.go:576] duration metric: took 15.7978326s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 17:54:10.508962    1700 node_conditions.go:102] verifying NodePressure condition ...
	I0415 17:54:10.508962    1700 round_trippers.go:463] GET https://127.0.0.1:51310/api/v1/nodes
	I0415 17:54:10.508962    1700 round_trippers.go:469] Request Headers:
	I0415 17:54:10.508962    1700 round_trippers.go:473]     Accept: application/json, */*
	I0415 17:54:10.508962    1700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0415 17:54:10.515373    1700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0415 17:54:10.515373    1700 round_trippers.go:577] Response Headers:
	I0415 17:54:10.515924    1700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f5b17461-d5ab-4006-9b60-e95090953432
	I0415 17:54:10.515924    1700 round_trippers.go:580]     Date: Mon, 15 Apr 2024 17:54:10 GMT
	I0415 17:54:10.515924    1700 round_trippers.go:580]     Audit-Id: 73b51524-625e-4adf-941e-5ac99abeed51
	I0415 17:54:10.515924    1700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0415 17:54:10.515924    1700 round_trippers.go:580]     Content-Type: application/json
	I0415 17:54:10.515924    1700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 21e52993-946c-45d3-8309-1a0dfccc43a5
	I0415 17:54:10.516126    1700 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"493"},"items":[{"metadata":{"name":"functional-662500","uid":"e2563e54-363e-4716-a128-07193b7b1536","resourceVersion":"402","creationTimestamp":"2024-04-15T17:53:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-662500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c","minikube.k8s.io/name":"functional-662500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_15T17_53_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"m
anagedFields":[{"manager":"kubelet","operation":"Update","apiVersion":" [truncated 4915 chars]
	I0415 17:54:10.516126    1700 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I0415 17:54:10.516126    1700 node_conditions.go:123] node cpu capacity is 16
	I0415 17:54:10.516685    1700 node_conditions.go:105] duration metric: took 7.7235ms to run NodePressure ...
	I0415 17:54:10.516685    1700 start.go:240] waiting for startup goroutines ...
	I0415 17:54:10.516685    1700 start.go:245] waiting for cluster config update ...
	I0415 17:54:10.516771    1700 start.go:254] writing updated cluster config ...
	I0415 17:54:10.531557    1700 ssh_runner.go:195] Run: rm -f paused
	I0415 17:54:10.667219    1700 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0415 17:54:10.670559    1700 out.go:177] * Done! kubectl is now configured to use "functional-662500" cluster and "default" namespace by default
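	Editor's note: the start log above ends with minikube's usual readiness sequence: a GET to /healthz and then /version on the forwarded apiserver port, followed by the kube-system pod list, default service-account, kubelet service and node-condition checks. As a rough illustration only (not minikube's actual client, which authenticates with the cluster's client certificates), a minimal Go sketch of that healthz/version poll against an assumed forwarded endpoint 127.0.0.1:51310 could look like this:

	// sketch.go: illustrative only; port 51310 and InsecureSkipVerify are assumptions,
	// the real check uses the cluster's client certificates.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
		}}
		for _, path := range []string{"/healthz", "/version"} {
			resp, err := client.Get("https://127.0.0.1:51310" + path)
			if err != nil {
				fmt.Println(path, "error:", err)
				continue
			}
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("%s -> %d\n%s\n", path, resp.StatusCode, body)
		}
	}

	In the run above both endpoints returned 200 (healthz "ok", version v1.29.3), which is why the start completed successfully.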
	
	
	==> Docker <==
	Apr 15 17:53:47 functional-662500 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Apr 15 17:53:47 functional-662500 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	Apr 15 17:53:47 functional-662500 systemd[1]: cri-docker.service: Deactivated successfully.
	Apr 15 17:53:47 functional-662500 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	Apr 15 17:53:47 functional-662500 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	Apr 15 17:53:47 functional-662500 cri-dockerd[4953]: time="2024-04-15T17:53:47Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Apr 15 17:53:47 functional-662500 cri-dockerd[4953]: time="2024-04-15T17:53:47Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Apr 15 17:53:47 functional-662500 cri-dockerd[4953]: time="2024-04-15T17:53:47Z" level=info msg="Start docker client with request timeout 0s"
	Apr 15 17:53:47 functional-662500 cri-dockerd[4953]: time="2024-04-15T17:53:47Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Apr 15 17:53:47 functional-662500 cri-dockerd[4953]: time="2024-04-15T17:53:47Z" level=info msg="Loaded network plugin cni"
	Apr 15 17:53:47 functional-662500 cri-dockerd[4953]: time="2024-04-15T17:53:47Z" level=info msg="Docker cri networking managed by network plugin cni"
	Apr 15 17:53:47 functional-662500 cri-dockerd[4953]: time="2024-04-15T17:53:47Z" level=info msg="Setting cgroupDriver cgroupfs"
	Apr 15 17:53:47 functional-662500 cri-dockerd[4953]: time="2024-04-15T17:53:47Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Apr 15 17:53:47 functional-662500 cri-dockerd[4953]: time="2024-04-15T17:53:47Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Apr 15 17:53:47 functional-662500 cri-dockerd[4953]: time="2024-04-15T17:53:47Z" level=info msg="Start cri-dockerd grpc backend"
	Apr 15 17:53:47 functional-662500 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Apr 15 17:53:47 functional-662500 cri-dockerd[4953]: time="2024-04-15T17:53:47Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-76f75df574-4mhcz_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a86b6c6c278378b09f14044d7eecb07772d24ad9925ff442198d8725ac2b798e\""
	Apr 15 17:53:48 functional-662500 cri-dockerd[4953]: time="2024-04-15T17:53:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/25bfe852736fca40ea1939dad6b7bd1dc96e6c8e9fe3679660a96f812b2e3477/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Apr 15 17:53:49 functional-662500 cri-dockerd[4953]: time="2024-04-15T17:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/73eacf01c36c2ee6ce4f356f25e0a2df6c694ce1e89c320bf7a8e736eeb96cd5/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Apr 15 17:53:49 functional-662500 cri-dockerd[4953]: time="2024-04-15T17:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/58bfd1b5429b723b559895843764c1bc577cd5254078dbdaba6d22f34a5b0299/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Apr 15 17:53:49 functional-662500 cri-dockerd[4953]: time="2024-04-15T17:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fc64b224075f02c7a55245b33d7cf7c5c3283abb74e28d2197b0a3723ab22f3d/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Apr 15 17:53:49 functional-662500 cri-dockerd[4953]: time="2024-04-15T17:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b7e707a1b32aea578cf3b871ac4c4a5a28e0f5f8929927c3bc56ae39b1583387/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Apr 15 17:53:49 functional-662500 cri-dockerd[4953]: time="2024-04-15T17:53:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/eb7e953d24cca9caeafcfebea3a1e06ebb7ffd18a527b55f21a2f513ce0b1bec/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Apr 15 17:53:50 functional-662500 cri-dockerd[4953]: time="2024-04-15T17:53:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4622cf2886a8cda88b14262ba4d0a44f185013421d9cdbe68d0a31103d03c8f7/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Apr 15 17:53:51 functional-662500 dockerd[4633]: time="2024-04-15T17:53:51.694743134Z" level=info msg="ignoring event" container=6eb778c978dcaecaf74ea5399d8dcb6896fef3190c9eb1fd76a981d8c7ec0801 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	1f6a06d20d07d       6e38f40d628db       23 seconds ago       Running             storage-provisioner       2                   fc64b224075f0       storage-provisioner
	536d4c9f6dad7       cbb01a7bd410d       42 seconds ago       Running             coredns                   1                   4622cf2886a8c       coredns-76f75df574-4mhcz
	be034781840e1       6052a25da3f97       42 seconds ago       Running             kube-controller-manager   1                   b7e707a1b32ae       kube-controller-manager-functional-662500
	69b53289a44b9       39f995c9f1996       42 seconds ago       Running             kube-apiserver            1                   eb7e953d24cca       kube-apiserver-functional-662500
	6eb778c978dca       6e38f40d628db       43 seconds ago       Exited              storage-provisioner       1                   fc64b224075f0       storage-provisioner
	8c61e75fc1f38       8c390d98f50c0       43 seconds ago       Running             kube-scheduler            1                   58bfd1b5429b7       kube-scheduler-functional-662500
	7c90131b10d0e       a1d263b5dc5b0       43 seconds ago       Running             kube-proxy                1                   73eacf01c36c2       kube-proxy-ppj2k
	c8f35f71406e5       3861cfcd7c04c       44 seconds ago       Running             etcd                      1                   25bfe852736fc       etcd-functional-662500
	42f8b68c7ae81       cbb01a7bd410d       About a minute ago   Exited              coredns                   0                   a86b6c6c27837       coredns-76f75df574-4mhcz
	6be4de641bec8       a1d263b5dc5b0       About a minute ago   Exited              kube-proxy                0                   0d9c99dfd1111       kube-proxy-ppj2k
	fe55f3f5193a8       6052a25da3f97       About a minute ago   Exited              kube-controller-manager   0                   4b37e6028278d       kube-controller-manager-functional-662500
	56836e7bb4db5       39f995c9f1996       About a minute ago   Exited              kube-apiserver            0                   591f9de88db46       kube-apiserver-functional-662500
	1377b6de70cc5       8c390d98f50c0       About a minute ago   Exited              kube-scheduler            0                   948c23bb4bce4       kube-scheduler-functional-662500
	e90aafd236a1b       3861cfcd7c04c       About a minute ago   Exited              etcd                      0                   03fb501347966       etcd-functional-662500
	
	
	==> coredns [42f8b68c7ae8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [536d4c9f6dad] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = f869070685748660180df1b7a47d58cdafcf2f368266578c062d1151dc2c900964aecc5975e8882e6de6fdfb6460463e30ebfaad2ec8f0c3c6436f80225b3b5b
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:53421 - 9846 "HINFO IN 6534793055760685979.2639886974151548480. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.088332181s
	
	
	==> describe nodes <==
	Name:               functional-662500
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-662500
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c
	                    minikube.k8s.io/name=functional-662500
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_15T17_53_08_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Apr 2024 17:53:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-662500
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Apr 2024 17:54:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Apr 2024 17:53:28 +0000   Mon, 15 Apr 2024 17:53:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Apr 2024 17:53:28 +0000   Mon, 15 Apr 2024 17:53:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Apr 2024 17:53:28 +0000   Mon, 15 Apr 2024 17:53:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Apr 2024 17:53:28 +0000   Mon, 15 Apr 2024 17:53:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-662500
	Capacity:
	  cpu:                16
	  ephemeral-storage:  1055762868Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32868764Ki
	  pods:               110
	Allocatable:
	  cpu:                16
	  ephemeral-storage:  1055762868Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32868764Ki
	  pods:               110
	System Info:
	  Machine ID:                 c680cd76f098410c9c9c7023e48d9374
	  System UUID:                c680cd76f098410c9c9c7023e48d9374
	  Boot ID:                    65f83766-a313-43df-830a-07de4d414c98
	  Kernel Version:             5.15.146.1-microsoft-standard-WSL2
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-4mhcz                     100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     72s
	  kube-system                 etcd-functional-662500                       100m (0%)     0 (0%)      100Mi (0%)       0 (0%)         86s
	  kube-system                 kube-apiserver-functional-662500             250m (1%)     0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-controller-manager-functional-662500    200m (1%)     0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-proxy-ppj2k                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-scheduler-functional-662500             100m (0%)     0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         70s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (4%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 68s   kube-proxy       
	  Normal  Starting                 34s   kube-proxy       
	  Normal  Starting                 86s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  85s   kubelet          Node functional-662500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    85s   kubelet          Node functional-662500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     85s   kubelet          Node functional-662500 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             85s   kubelet          Node functional-662500 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  85s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                75s   kubelet          Node functional-662500 status is now: NodeReady
	  Normal  RegisteredNode           73s   node-controller  Node functional-662500 event: Registered Node functional-662500 in Controller
	  Normal  RegisteredNode           24s   node-controller  Node functional-662500 event: Registered Node functional-662500 in Controller
	
	
	==> dmesg <==
	[  +0.033924] WSL (1) WARNING: /usr/share/zoneinfo/Etc/UTC not found. Is the tzdata package installed?
	[  +0.590234] misc dxg: dxgk: dxgglobal_acquire_channel_lock: Failed to acquire global channel lock
	[  +3.384236] FS-Cache: Duplicate cookie detected
	[  +0.001115] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.001072] FS-Cache: O-cookie d=000000005e57acda{9P.session} n=0000000031a5e59c
	[  +0.001222] FS-Cache: O-key=[10] '34323934393337393730'
	[  +0.000793] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000979] FS-Cache: N-cookie d=000000005e57acda{9P.session} n=000000002f4cef66
	[  +0.001172] FS-Cache: N-key=[10] '34323934393337393730'
	[  +0.009917] WSL (2) ERROR: UtilCreateProcessAndWait:665: /bin/mount failed with 2
	[  +0.001924] WSL (1) ERROR: UtilCreateProcessAndWait:687: /bin/mount failed with status 0xff00
	
	[  +0.002222] WSL (1) ERROR: ConfigMountFsTab:2589: Processing fstab with mount -a failed.
	[  +0.005521] WSL (1) ERROR: ConfigApplyWindowsLibPath:2537: open /etc/ld.so.conf.d/ld.wsl.conf
	[  +0.000003]  failed 2
	[  +0.009019] WSL (3) ERROR: UtilCreateProcessAndWait:665: /bin/mount failed with 2
	[  +0.001559] WSL (1) ERROR: UtilCreateProcessAndWait:687: /bin/mount failed with status 0xff00
	
	[  +0.003501] WSL (4) ERROR: UtilCreateProcessAndWait:665: /bin/mount failed with 2
	[  +0.001754] WSL (1) ERROR: UtilCreateProcessAndWait:687: /bin/mount failed with status 0xff00
	
	[  +0.060722] WSL (1) WARNING: /usr/share/zoneinfo/Etc/UTC not found. Is the tzdata package installed?
	[  +0.122252] misc dxg: dxgk: dxgglobal_acquire_channel_lock: Failed to acquire global channel lock
	[  +0.812584] netlink: 'init': attribute type 4 has an invalid length.
	[Apr15 17:43] hrtimer: interrupt took 1148182 ns
	
	
	==> etcd [c8f35f71406e] <==
	{"level":"info","ts":"2024-04-15T17:53:50.590954Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-04-15T17:53:50.593265Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-04-15T17:53:50.593672Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-15T17:53:50.593798Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-15T17:53:50.595757Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-15T17:53:50.596076Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-04-15T17:53:50.596116Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-04-15T17:53:50.596273Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-15T17:53:50.596566Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-15T17:53:51.70598Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-15T17:53:51.706246Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-15T17:53:51.706292Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-04-15T17:53:51.706321Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2024-04-15T17:53:51.706338Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-04-15T17:53:51.706381Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2024-04-15T17:53:51.706401Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-04-15T17:53:51.796156Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-662500 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-15T17:53:51.796196Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-15T17:53:51.796332Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-15T17:53:51.798219Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-15T17:53:51.798537Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-15T17:53:51.80395Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-04-15T17:53:51.804118Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-15T17:53:57.195747Z","caller":"traceutil/trace.go:171","msg":"trace[1787829801] transaction","detail":"{read_only:false; response_revision:413; number_of_response:1; }","duration":"102.526019ms","start":"2024-04-15T17:53:57.093198Z","end":"2024-04-15T17:53:57.195724Z","steps":["trace[1787829801] 'process raft request'  (duration: 102.102258ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T17:53:57.196145Z","caller":"traceutil/trace.go:171","msg":"trace[659219256] transaction","detail":"{read_only:false; number_of_response:1; response_revision:414; }","duration":"100.9958ms","start":"2024-04-15T17:53:57.095139Z","end":"2024-04-15T17:53:57.196134Z","steps":["trace[659219256] 'process raft request'  (duration: 100.354208ms)"],"step_count":1}
	
	
	==> etcd [e90aafd236a1] <==
	{"level":"info","ts":"2024-04-15T17:53:21.410716Z","caller":"traceutil/trace.go:171","msg":"trace[1367867415] transaction","detail":"{read_only:false; response_revision:321; number_of_response:1; }","duration":"108.872144ms","start":"2024-04-15T17:53:21.30182Z","end":"2024-04-15T17:53:21.410692Z","steps":["trace[1367867415] 'process raft request'  (duration: 102.788737ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T17:53:21.595981Z","caller":"traceutil/trace.go:171","msg":"trace[1004256911] transaction","detail":"{read_only:false; response_revision:326; number_of_response:1; }","duration":"100.7518ms","start":"2024-04-15T17:53:21.495151Z","end":"2024-04-15T17:53:21.595903Z","steps":["trace[1004256911] 'process raft request'  (duration: 18.9504ms)","trace[1004256911] 'compare'  (duration: 81.046812ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-15T17:53:21.795101Z","caller":"traceutil/trace.go:171","msg":"trace[1711212563] transaction","detail":"{read_only:false; response_revision:336; number_of_response:1; }","duration":"100.31825ms","start":"2024-04-15T17:53:21.694685Z","end":"2024-04-15T17:53:21.795003Z","steps":["trace[1711212563] 'compare'  (duration: 93.151618ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T17:53:21.795365Z","caller":"traceutil/trace.go:171","msg":"trace[2058751971] linearizableReadLoop","detail":"{readStateIndex:347; appliedIndex:346; }","duration":"100.586881ms","start":"2024-04-15T17:53:21.69476Z","end":"2024-04-15T17:53:21.795347Z","steps":["trace[2058751971] 'read index received'  (duration: 6.737582ms)","trace[2058751971] 'applied index is now lower than readState.Index'  (duration: 93.846599ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-15T17:53:21.796337Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.381073ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-ppj2k\" ","response":"range_response_count:1 size:3434"}
	{"level":"info","ts":"2024-04-15T17:53:21.796645Z","caller":"traceutil/trace.go:171","msg":"trace[796798383] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-ppj2k; range_end:; response_count:1; response_revision:336; }","duration":"101.897133ms","start":"2024-04-15T17:53:21.694733Z","end":"2024-04-15T17:53:21.79663Z","steps":["trace[796798383] 'agreement among raft nodes before linearized reading'  (duration: 100.995128ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T17:53:21.802813Z","caller":"traceutil/trace.go:171","msg":"trace[28827913] transaction","detail":"{read_only:false; response_revision:337; number_of_response:1; }","duration":"106.741296ms","start":"2024-04-15T17:53:21.69606Z","end":"2024-04-15T17:53:21.802801Z","steps":["trace[28827913] 'process raft request'  (duration: 106.635183ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T17:53:21.802968Z","caller":"traceutil/trace.go:171","msg":"trace[1582853414] transaction","detail":"{read_only:false; response_revision:338; number_of_response:1; }","duration":"105.895698ms","start":"2024-04-15T17:53:21.69706Z","end":"2024-04-15T17:53:21.802956Z","steps":["trace[1582853414] 'process raft request'  (duration: 105.692274ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T17:53:21.80283Z","caller":"traceutil/trace.go:171","msg":"trace[937786849] transaction","detail":"{read_only:false; response_revision:339; number_of_response:1; }","duration":"103.394407ms","start":"2024-04-15T17:53:21.699426Z","end":"2024-04-15T17:53:21.80282Z","steps":["trace[937786849] 'process raft request'  (duration: 103.362403ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-15T17:53:21.813103Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.140291ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/coredns\" ","response":"range_response_count:1 size:612"}
	{"level":"info","ts":"2024-04-15T17:53:21.813607Z","caller":"traceutil/trace.go:171","msg":"trace[328656287] range","detail":"{range_begin:/registry/configmaps/kube-system/coredns; range_end:; response_count:1; response_revision:341; }","duration":"110.670052ms","start":"2024-04-15T17:53:21.702919Z","end":"2024-04-15T17:53:21.813588Z","steps":["trace[328656287] 'agreement among raft nodes before linearized reading'  (duration: 110.112788ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T17:53:21.814198Z","caller":"traceutil/trace.go:171","msg":"trace[1490664649] transaction","detail":"{read_only:false; response_revision:340; number_of_response:1; }","duration":"109.886861ms","start":"2024-04-15T17:53:21.704088Z","end":"2024-04-15T17:53:21.813975Z","steps":["trace[1490664649] 'process raft request'  (duration: 108.732427ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T17:53:22.291514Z","caller":"traceutil/trace.go:171","msg":"trace[1420711536] transaction","detail":"{read_only:false; response_revision:348; number_of_response:1; }","duration":"100.125833ms","start":"2024-04-15T17:53:22.191363Z","end":"2024-04-15T17:53:22.291489Z","steps":["trace[1420711536] 'process raft request'  (duration: 99.996114ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-15T17:53:22.920046Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"111.081568ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/kube-system/coredns-76f75df574\" ","response":"range_response_count:1 size:3797"}
	{"level":"info","ts":"2024-04-15T17:53:22.920827Z","caller":"traceutil/trace.go:171","msg":"trace[1888507990] range","detail":"{range_begin:/registry/replicasets/kube-system/coredns-76f75df574; range_end:; response_count:1; response_revision:358; }","duration":"111.941889ms","start":"2024-04-15T17:53:22.808801Z","end":"2024-04-15T17:53:22.920743Z","steps":["trace[1888507990] 'agreement among raft nodes before linearized reading'  (duration: 82.397648ms)","trace[1888507990] 'range keys from in-memory index tree'  (duration: 28.604509ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-15T17:53:34.494115Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-04-15T17:53:34.494444Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"functional-662500","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2024-04-15T17:53:34.494551Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-15T17:53:34.494664Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-15T17:53:34.608088Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-15T17:53:34.608384Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-15T17:53:34.696397Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2024-04-15T17:53:34.709037Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-04-15T17:53:34.709627Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-04-15T17:53:34.709681Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"functional-662500","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 17:54:33 up  5:34,  0 users,  load average: 1.09, 1.72, 1.32
	Linux functional-662500 5.15.146.1-microsoft-standard-WSL2 #1 SMP Thu Jan 11 04:09:03 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kube-apiserver [56836e7bb4db] <==
	W0415 17:53:43.665460       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 17:53:43.732402       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 17:53:43.789054       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 17:53:43.795539       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 17:53:43.796968       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 17:53:43.804550       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 17:53:43.867474       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 17:53:43.887396       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 17:53:43.901541       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 17:53:43.904368       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 17:53:43.969520       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 17:53:44.016826       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 17:53:44.058079       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 17:53:44.097321       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 17:53:44.166151       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 17:53:44.183216       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 17:53:44.233261       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 17:53:44.249605       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 17:53:44.254182       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 17:53:44.392953       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 17:53:44.454021       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 17:53:44.479177       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 17:53:44.531830       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 17:53:44.555265       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 17:53:44.564751       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [69b53289a44b] <==
	I0415 17:53:56.704835       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0415 17:53:56.792987       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0415 17:53:56.892194       1 controller.go:85] Starting OpenAPI V3 controller
	I0415 17:53:56.892849       1 naming_controller.go:291] Starting NamingConditionController
	I0415 17:53:56.892879       1 establishing_controller.go:76] Starting EstablishingController
	I0415 17:53:56.801339       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0415 17:53:56.801409       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0415 17:53:56.704560       1 handler_discovery.go:412] Starting ResourceDiscoveryManager
	I0415 17:53:56.806239       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0415 17:53:56.893111       1 aggregator.go:165] initial CRD sync complete...
	I0415 17:53:56.893120       1 autoregister_controller.go:141] Starting autoregister controller
	I0415 17:53:56.893129       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0415 17:53:56.893833       1 cache.go:39] Caches are synced for autoregister controller
	I0415 17:53:56.898054       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0415 17:53:56.904654       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0415 17:53:56.904889       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0415 17:53:56.904899       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0415 17:53:56.904960       1 shared_informer.go:318] Caches are synced for configmaps
	I0415 17:53:57.088885       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0415 17:53:57.089454       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0415 17:53:57.091013       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	E0415 17:53:57.289189       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0415 17:53:57.709945       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0415 17:54:09.615891       1 controller.go:624] quota admission added evaluator for: endpoints
	I0415 17:54:09.712068       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [be034781840e] <==
	I0415 17:54:09.597608       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0415 17:54:09.598617       1 shared_informer.go:318] Caches are synced for TTL
	I0415 17:54:09.600017       1 shared_informer.go:318] Caches are synced for service account
	I0415 17:54:09.601635       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0415 17:54:09.602691       1 shared_informer.go:318] Caches are synced for ephemeral
	I0415 17:54:09.603537       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0415 17:54:09.607100       1 shared_informer.go:318] Caches are synced for job
	I0415 17:54:09.607279       1 shared_informer.go:318] Caches are synced for node
	I0415 17:54:09.607323       1 range_allocator.go:174] "Sending events to api server"
	I0415 17:54:09.607365       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0415 17:54:09.607375       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0415 17:54:09.607383       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0415 17:54:09.607459       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0415 17:54:09.607833       1 shared_informer.go:318] Caches are synced for daemon sets
	I0415 17:54:09.689130       1 shared_informer.go:318] Caches are synced for crt configmap
	I0415 17:54:09.689328       1 shared_informer.go:318] Caches are synced for persistent volume
	I0415 17:54:09.689503       1 shared_informer.go:318] Caches are synced for PV protection
	I0415 17:54:09.710723       1 shared_informer.go:318] Caches are synced for disruption
	I0415 17:54:09.717662       1 shared_informer.go:318] Caches are synced for deployment
	I0415 17:54:09.718478       1 shared_informer.go:318] Caches are synced for attach detach
	I0415 17:54:09.789397       1 shared_informer.go:318] Caches are synced for resource quota
	I0415 17:54:09.800115       1 shared_informer.go:318] Caches are synced for resource quota
	I0415 17:54:10.103356       1 shared_informer.go:318] Caches are synced for garbage collector
	I0415 17:54:10.103452       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0415 17:54:10.198822       1 shared_informer.go:318] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [fe55f3f5193a] <==
	I0415 17:53:20.916458       1 shared_informer.go:318] Caches are synced for garbage collector
	I0415 17:53:20.916562       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0415 17:53:21.492447       1 event.go:376] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-76f75df574 to 2"
	I0415 17:53:21.601603       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-ppj2k"
	I0415 17:53:21.601653       1 event.go:376] "Event occurred" object="kube-system/coredns-76f75df574" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-76f75df574-s6hbp"
	I0415 17:53:21.694093       1 event.go:376] "Event occurred" object="kube-system/coredns-76f75df574" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-76f75df574-4mhcz"
	I0415 17:53:21.805035       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="392.345963ms"
	I0415 17:53:21.824705       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="19.588274ms"
	I0415 17:53:21.824994       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="126.315µs"
	I0415 17:53:21.826510       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="59.907µs"
	I0415 17:53:22.013268       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="222.431µs"
	I0415 17:53:22.293875       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="151.321µs"
	I0415 17:53:22.692690       1 event.go:376] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-76f75df574 to 1 from 2"
	I0415 17:53:22.720028       1 event.go:376] "Event occurred" object="kube-system/coredns-76f75df574" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-76f75df574-s6hbp"
	I0415 17:53:22.793266       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="101.095268ms"
	I0415 17:53:23.008598       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="215.019735ms"
	I0415 17:53:23.008862       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="42.206µs"
	I0415 17:53:25.528966       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="247.534µs"
	I0415 17:53:25.622685       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="114.216µs"
	I0415 17:53:25.693306       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="52.793399ms"
	I0415 17:53:25.694050       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="73.711µs"
	I0415 17:53:31.047720       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="166.124µs"
	I0415 17:53:31.735471       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="58.108µs"
	I0415 17:53:31.761024       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="56.608µs"
	I0415 17:53:31.785317       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="109.216µs"
	
	
	==> kube-proxy [6be4de641bec] <==
	I0415 17:53:24.427304       1 server_others.go:72] "Using iptables proxy"
	I0415 17:53:24.442190       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0415 17:53:24.504368       1 server.go:652] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0415 17:53:24.504548       1 server_others.go:168] "Using iptables Proxier"
	I0415 17:53:24.508252       1 server_others.go:512] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0415 17:53:24.508328       1 server_others.go:529] "Defaulting to no-op detect-local"
	I0415 17:53:24.508360       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0415 17:53:24.509203       1 server.go:865] "Version info" version="v1.29.3"
	I0415 17:53:24.509302       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0415 17:53:24.511157       1 config.go:315] "Starting node config controller"
	I0415 17:53:24.511288       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0415 17:53:24.511701       1 config.go:188] "Starting service config controller"
	I0415 17:53:24.511747       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0415 17:53:24.511723       1 config.go:97] "Starting endpoint slice config controller"
	I0415 17:53:24.511781       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0415 17:53:24.612447       1 shared_informer.go:318] Caches are synced for service config
	I0415 17:53:24.612500       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0415 17:53:24.612521       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [7c90131b10d0] <==
	I0415 17:53:51.294001       1 server_others.go:72] "Using iptables proxy"
	E0415 17:53:51.303250       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-662500\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E0415 17:53:56.990537       1 server.go:1039] "Failed to retrieve node info" err="nodes \"functional-662500\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot get resource \"nodes\" in API group \"\" at the cluster scope"
	I0415 17:53:59.019882       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0415 17:53:59.100607       1 server.go:652] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0415 17:53:59.100719       1 server_others.go:168] "Using iptables Proxier"
	I0415 17:53:59.104277       1 server_others.go:512] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0415 17:53:59.104378       1 server_others.go:529] "Defaulting to no-op detect-local"
	I0415 17:53:59.104414       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0415 17:53:59.105040       1 server.go:865] "Version info" version="v1.29.3"
	I0415 17:53:59.105151       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0415 17:53:59.107004       1 config.go:188] "Starting service config controller"
	I0415 17:53:59.107133       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0415 17:53:59.107343       1 config.go:315] "Starting node config controller"
	I0415 17:53:59.107365       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0415 17:53:59.107404       1 config.go:97] "Starting endpoint slice config controller"
	I0415 17:53:59.107576       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0415 17:53:59.208818       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0415 17:53:59.208965       1 shared_informer.go:318] Caches are synced for node config
	I0415 17:53:59.208980       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [1377b6de70cc] <==
	W0415 17:53:04.625788       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0415 17:53:04.625914       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0415 17:53:04.647166       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0415 17:53:04.647260       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0415 17:53:04.658563       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0415 17:53:04.658587       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0415 17:53:04.704264       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0415 17:53:04.704367       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0415 17:53:04.713094       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0415 17:53:04.713197       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0415 17:53:04.818031       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0415 17:53:04.818135       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0415 17:53:04.849364       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0415 17:53:04.849502       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0415 17:53:04.937634       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0415 17:53:04.937765       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0415 17:53:04.952510       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0415 17:53:04.952650       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0415 17:53:05.018932       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0415 17:53:05.019048       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0415 17:53:06.410454       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0415 17:53:34.597532       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0415 17:53:34.597708       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0415 17:53:34.598197       1 run.go:74] "command failed" err="finished without leader elect"
	I0415 17:53:34.598219       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [8c61e75fc1f3] <==
	I0415 17:53:52.822798       1 serving.go:380] Generated self-signed cert in-memory
	W0415 17:53:56.794669       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0415 17:53:56.794745       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0415 17:53:56.794766       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0415 17:53:56.794780       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0415 17:53:57.092321       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0415 17:53:57.092463       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0415 17:53:57.095389       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0415 17:53:57.095532       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0415 17:53:57.096561       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0415 17:53:57.096640       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0415 17:53:57.196415       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 15 17:53:51 functional-662500 kubelet[2491]: I0415 17:53:51.901638    2491 status_manager.go:853] "Failed to get status for pod" podUID="cf22128e1f7ff41b8f2626e32eecb6cb" pod="kube-system/kube-apiserver-functional-662500" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-662500\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Apr 15 17:53:51 functional-662500 kubelet[2491]: I0415 17:53:51.903622    2491 status_manager.go:853] "Failed to get status for pod" podUID="9d3337215b06a22725acdc065643e199" pod="kube-system/kube-scheduler-functional-662500" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-662500\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Apr 15 17:53:51 functional-662500 kubelet[2491]: I0415 17:53:51.904446    2491 status_manager.go:853] "Failed to get status for pod" podUID="8589b092-fe36-43c4-8c49-c3a4031b4e30" pod="kube-system/kube-proxy-ppj2k" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-ppj2k\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Apr 15 17:53:52 functional-662500 kubelet[2491]: I0415 17:53:52.090419    2491 status_manager.go:853] "Failed to get status for pod" podUID="e691e826-53b3-4913-8044-49462113527f" pod="kube-system/coredns-76f75df574-4mhcz" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-76f75df574-4mhcz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Apr 15 17:53:52 functional-662500 kubelet[2491]: I0415 17:53:52.090996    2491 status_manager.go:853] "Failed to get status for pod" podUID="c2731567-18b5-4c30-9bfe-257c96aa88e9" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Apr 15 17:53:52 functional-662500 kubelet[2491]: I0415 17:53:52.091626    2491 status_manager.go:853] "Failed to get status for pod" podUID="afc0bd85e5642afc4ef76d5bfc7ddf78" pod="kube-system/etcd-functional-662500" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-662500\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Apr 15 17:53:52 functional-662500 kubelet[2491]: I0415 17:53:52.092408    2491 status_manager.go:853] "Failed to get status for pod" podUID="f213e110514642074fa794224f2ddd8f" pod="kube-system/kube-controller-manager-functional-662500" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-662500\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Apr 15 17:53:52 functional-662500 kubelet[2491]: I0415 17:53:52.093071    2491 status_manager.go:853] "Failed to get status for pod" podUID="cf22128e1f7ff41b8f2626e32eecb6cb" pod="kube-system/kube-apiserver-functional-662500" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-662500\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Apr 15 17:53:52 functional-662500 kubelet[2491]: I0415 17:53:52.093669    2491 status_manager.go:853] "Failed to get status for pod" podUID="9d3337215b06a22725acdc065643e199" pod="kube-system/kube-scheduler-functional-662500" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-662500\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Apr 15 17:53:52 functional-662500 kubelet[2491]: I0415 17:53:52.094281    2491 status_manager.go:853] "Failed to get status for pod" podUID="8589b092-fe36-43c4-8c49-c3a4031b4e30" pod="kube-system/kube-proxy-ppj2k" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-ppj2k\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Apr 15 17:53:52 functional-662500 kubelet[2491]: I0415 17:53:52.196820    2491 status_manager.go:853] "Failed to get status for pod" podUID="e691e826-53b3-4913-8044-49462113527f" pod="kube-system/coredns-76f75df574-4mhcz" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-76f75df574-4mhcz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Apr 15 17:53:52 functional-662500 kubelet[2491]: I0415 17:53:52.197213    2491 status_manager.go:853] "Failed to get status for pod" podUID="c2731567-18b5-4c30-9bfe-257c96aa88e9" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Apr 15 17:53:52 functional-662500 kubelet[2491]: I0415 17:53:52.197681    2491 status_manager.go:853] "Failed to get status for pod" podUID="afc0bd85e5642afc4ef76d5bfc7ddf78" pod="kube-system/etcd-functional-662500" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-662500\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Apr 15 17:53:52 functional-662500 kubelet[2491]: I0415 17:53:52.198095    2491 status_manager.go:853] "Failed to get status for pod" podUID="f213e110514642074fa794224f2ddd8f" pod="kube-system/kube-controller-manager-functional-662500" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-662500\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Apr 15 17:53:52 functional-662500 kubelet[2491]: I0415 17:53:52.198568    2491 status_manager.go:853] "Failed to get status for pod" podUID="cf22128e1f7ff41b8f2626e32eecb6cb" pod="kube-system/kube-apiserver-functional-662500" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-662500\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Apr 15 17:53:52 functional-662500 kubelet[2491]: I0415 17:53:52.198913    2491 status_manager.go:853] "Failed to get status for pod" podUID="9d3337215b06a22725acdc065643e199" pod="kube-system/kube-scheduler-functional-662500" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-662500\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Apr 15 17:53:52 functional-662500 kubelet[2491]: I0415 17:53:52.199235    2491 status_manager.go:853] "Failed to get status for pod" podUID="8589b092-fe36-43c4-8c49-c3a4031b4e30" pod="kube-system/kube-proxy-ppj2k" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-ppj2k\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Apr 15 17:53:53 functional-662500 kubelet[2491]: I0415 17:53:53.893357    2491 scope.go:117] "RemoveContainer" containerID="33f898f58e9eb827816081245343030b67a38159858512c15d7c981e3b3679b4"
	Apr 15 17:53:53 functional-662500 kubelet[2491]: I0415 17:53:53.893862    2491 scope.go:117] "RemoveContainer" containerID="6eb778c978dcaecaf74ea5399d8dcb6896fef3190c9eb1fd76a981d8c7ec0801"
	Apr 15 17:53:53 functional-662500 kubelet[2491]: E0415 17:53:53.894152    2491 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c2731567-18b5-4c30-9bfe-257c96aa88e9)\"" pod="kube-system/storage-provisioner" podUID="c2731567-18b5-4c30-9bfe-257c96aa88e9"
	Apr 15 17:53:55 functional-662500 kubelet[2491]: I0415 17:53:55.008822    2491 scope.go:117] "RemoveContainer" containerID="6eb778c978dcaecaf74ea5399d8dcb6896fef3190c9eb1fd76a981d8c7ec0801"
	Apr 15 17:53:55 functional-662500 kubelet[2491]: E0415 17:53:55.009253    2491 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c2731567-18b5-4c30-9bfe-257c96aa88e9)\"" pod="kube-system/storage-provisioner" podUID="c2731567-18b5-4c30-9bfe-257c96aa88e9"
	Apr 15 17:53:56 functional-662500 kubelet[2491]: E0415 17:53:56.893444    2491 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Apr 15 17:53:56 functional-662500 kubelet[2491]: E0415 17:53:56.893710    2491 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Apr 15 17:54:09 functional-662500 kubelet[2491]: I0415 17:54:09.194677    2491 scope.go:117] "RemoveContainer" containerID="6eb778c978dcaecaf74ea5399d8dcb6896fef3190c9eb1fd76a981d8c7ec0801"
	
	
	==> storage-provisioner [1f6a06d20d07] <==
	I0415 17:54:09.713485       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0415 17:54:09.793625       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0415 17:54:09.793765       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0415 17:54:27.215469       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0415 17:54:27.215772       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dd3a5d9f-2130-4a25-bb7d-9bb954a8d0e3", APIVersion:"v1", ResourceVersion:"498", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-662500_e7a932d9-1676-4874-abc1-c0ff74fa92f9 became leader
	I0415 17:54:27.215936       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-662500_e7a932d9-1676-4874-abc1-c0ff74fa92f9!
	I0415 17:54:27.317060       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-662500_e7a932d9-1676-4874-abc1-c0ff74fa92f9!
	
	
	==> storage-provisioner [6eb778c978dc] <==
	I0415 17:53:51.400576       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0415 17:53:51.493718       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 17:54:31.421403   11552 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-662500 -n functional-662500
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-662500 -n functional-662500: (1.2615282s)
helpers_test.go:261: (dbg) Run:  kubectl --context functional-662500 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
E0415 17:54:35.962454   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-661400\client.crt: The system cannot find the path specified.
helpers_test.go:285: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (6.24s)
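Every minikube invocation in this run writes the same stderr warning because the Docker CLI's current-context metadata file is missing on the Jenkins host. The snippet below is a minimal diagnostic sketch in Go, not part of the test suite: it only stats the meta.json path quoted verbatim in the warning to confirm whether the context-store entry exists.

// diagnostic sketch: verify the Docker CLI context metadata file named in the
// warning above; the hash directory and file name are copied from the report.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	meta := filepath.Join(os.Getenv("USERPROFILE"), ".docker", "contexts", "meta",
		"37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f", "meta.json")
	if _, err := os.Stat(meta); err != nil {
		// this is the condition that produces the "Unable to resolve the current
		// Docker CLI context" warning on every command's stderr
		fmt.Printf("context metadata missing: %v\n", err)
		return
	}
	fmt.Println("context metadata present")
}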

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (1.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-662500 config unset cpus" to be -""- but got *"W0415 17:55:37.023434    2984 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube4\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-662500 config get cpus: exit status 14 (252.1816ms)

                                                
                                                
** stderr ** 
	W0415 17:55:37.336944   14368 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-662500 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0415 17:55:37.336944   14368 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube4\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 config set cpus 2
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-662500 config set cpus 2" to be -"! These changes will take effect upon a minikube delete and then a minikube start"- but got *"W0415 17:55:37.571107   11460 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube4\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n! These changes will take effect upon a minikube delete and then a minikube start"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 config get cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-662500 config get cpus" to be -""- but got *"W0415 17:55:37.836554    6632 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube4\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-662500 config unset cpus" to be -""- but got *"W0415 17:55:38.092560   13228 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube4\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-662500 config get cpus: exit status 14 (243.3355ms)

                                                
                                                
** stderr ** 
	W0415 17:55:38.353502    5708 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-662500 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0415 17:55:38.353502    5708 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube4\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
--- FAIL: TestFunctional/parallel/ConfigCmd (1.57s)
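The ConfigCmd failures above all follow one pattern: the assertion at functional_test.go:1206 compares the command's stderr against an exact expected string, and the unrelated Docker-context warning prepended to every stderr makes the strings differ. The snippet below is a hypothetical Go sketch, not minikube's actual helper, showing that once the known warning line is filtered out the remaining stderr matches the expectation.

// hypothetical sketch: strip the known Docker-context warning before comparing
// stderr with the expected error text; the strings are taken from the report above.
package main

import (
	"fmt"
	"strings"
)

func stripKnownWarning(stderr string) string {
	var kept []string
	for _, line := range strings.Split(stderr, "\n") {
		if strings.Contains(line, "Unable to resolve the current Docker CLI context") {
			continue // unrelated environment warning, not under test
		}
		kept = append(kept, line)
	}
	return strings.TrimSpace(strings.Join(kept, "\n"))
}

func main() {
	got := "W0415 17:55:38.353502    5708 main.go:291] Unable to resolve the current Docker CLI context \"default\": ...\nError: specified key could not be found in config"
	want := "Error: specified key could not be found in config"
	fmt.Println(stripKnownWarning(got) == want) // prints true once the warning is ignored
}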

                                                
                                    
x
+
TestPause/serial/PauseAgain (45.97s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-176700 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-windows-amd64.exe pause -p pause-176700 --alsologtostderr -v=5: exit status 80 (9.6013686s)

                                                
                                                
-- stdout --
	* Pausing node pause-176700 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 18:56:00.987664    7900 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0415 18:56:01.074922    7900 out.go:291] Setting OutFile to fd 1980 ...
	I0415 18:56:01.074922    7900 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 18:56:01.074922    7900 out.go:304] Setting ErrFile to fd 1732...
	I0415 18:56:01.074922    7900 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 18:56:01.089940    7900 out.go:298] Setting JSON to false
	I0415 18:56:01.089940    7900 mustload.go:65] Loading cluster: pause-176700
	I0415 18:56:01.091944    7900 config.go:182] Loaded profile config "pause-176700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 18:56:01.119922    7900 cli_runner.go:164] Run: docker container inspect pause-176700 --format={{.State.Status}}
	I0415 18:56:01.316164    7900 host.go:66] Checking if "pause-176700" exists ...
	I0415 18:56:01.327166    7900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-176700
	I0415 18:56:01.509743    7900 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.33.0-1713175573-18634/minikube-v1.33.0-1713175573-18634-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.33.0-1713175573-18634-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string:C:\Users\jenkins.minikube4:/minikube-host mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-176700 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0415 18:56:03.890181    7900 out.go:177] * Pausing node pause-176700 ... 
	I0415 18:56:06.646457    7900 host.go:66] Checking if "pause-176700" exists ...
	I0415 18:56:07.256899    7900 ssh_runner.go:195] Run: systemctl --version
	I0415 18:56:07.263990    7900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-176700
	I0415 18:56:07.438760    7900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54491 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\pause-176700\id_rsa Username:docker}
	I0415 18:56:07.579963    7900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 18:56:07.607591    7900 pause.go:51] kubelet running: true
	I0415 18:56:07.620904    7900 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0415 18:56:07.948494    7900 ssh_runner.go:195] Run: docker ps --filter status=running --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}
	I0415 18:56:07.999340    7900 docker.go:500] Pausing containers: [13fbd1683bed eb699f5aa465 91ef24769e92 d1480622dacd 4c054de8e645 0b01eadb1fef a5252c26ecd2 1ca71c7d5b12 8578361676bc 4f7d65ecb802 30608984b6be 6113d883501c]
	I0415 18:56:08.009905    7900 ssh_runner.go:195] Run: docker pause 13fbd1683bed eb699f5aa465 91ef24769e92 d1480622dacd 4c054de8e645 0b01eadb1fef a5252c26ecd2 1ca71c7d5b12 8578361676bc 4f7d65ecb802 30608984b6be 6113d883501c
	I0415 18:56:10.276482    7900 ssh_runner.go:235] Completed: docker pause 13fbd1683bed eb699f5aa465 91ef24769e92 d1480622dacd 4c054de8e645 0b01eadb1fef a5252c26ecd2 1ca71c7d5b12 8578361676bc 4f7d65ecb802 30608984b6be 6113d883501c: (2.2664713s)
	I0415 18:56:10.282395    7900 out.go:177] 
	W0415 18:56:10.285579    7900 out.go:239] X Exiting due to GUEST_PAUSE: Pause: pausing containers: docker: docker pause 13fbd1683bed eb699f5aa465 91ef24769e92 d1480622dacd 4c054de8e645 0b01eadb1fef a5252c26ecd2 1ca71c7d5b12 8578361676bc 4f7d65ecb802 30608984b6be 6113d883501c: Process exited with status 1
	stdout:
	13fbd1683bed
	eb699f5aa465
	91ef24769e92
	d1480622dacd
	4c054de8e645
	a5252c26ecd2
	1ca71c7d5b12
	8578361676bc
	4f7d65ecb802
	30608984b6be
	6113d883501c
	
	stderr:
	Error response from daemon: cannot pause container 0b01eadb1fef7d505f0162e50ddffd82c7afe1087422896badbf4e5a98544454: OCI runtime pause failed: unable to freeze: unknown
	
	X Exiting due to GUEST_PAUSE: Pause: pausing containers: docker: docker pause 13fbd1683bed eb699f5aa465 91ef24769e92 d1480622dacd 4c054de8e645 0b01eadb1fef a5252c26ecd2 1ca71c7d5b12 8578361676bc 4f7d65ecb802 30608984b6be 6113d883501c: Process exited with status 1
	stdout:
	13fbd1683bed
	eb699f5aa465
	91ef24769e92
	d1480622dacd
	4c054de8e645
	a5252c26ecd2
	1ca71c7d5b12
	8578361676bc
	4f7d65ecb802
	30608984b6be
	6113d883501c
	
	stderr:
	Error response from daemon: cannot pause container 0b01eadb1fef7d505f0162e50ddffd82c7afe1087422896badbf4e5a98544454: OCI runtime pause failed: unable to freeze: unknown
	
	W0415 18:56:10.285579    7900 out.go:239] * 
	* 
	W0415 18:56:10.431207    7900 out.go:239] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube_pause_0a4d03c8adbe4992011689b475409882710ca950_10.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube_pause_0a4d03c8adbe4992011689b475409882710ca950_10.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 18:56:10.437213    7900 out.go:177] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-windows-amd64.exe pause -p pause-176700 --alsologtostderr -v=5" : exit status 80
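The pause partially succeeded: the daemon error names only container 0b01eadb1fef, while the stdout above lists the other eleven containers as paused. A hedged follow-up sketch in Go (it would have to run against the node's Docker daemon, e.g. from inside "minikube ssh -p pause-176700", since these containers do not exist on the Windows host) reports the Paused state of each container ID copied from the error message.

// follow-up sketch: print .State.Paused for each container the failed
// "docker pause" command operated on; IDs are copied from the error above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	ids := []string{
		"13fbd1683bed", "eb699f5aa465", "91ef24769e92", "d1480622dacd",
		"4c054de8e645", "0b01eadb1fef", "a5252c26ecd2", "1ca71c7d5b12",
		"8578361676bc", "4f7d65ecb802", "30608984b6be", "6113d883501c",
	}
	for _, id := range ids {
		out, err := exec.Command("docker", "inspect", "-f", "{{.State.Paused}}", id).Output()
		if err != nil {
			fmt.Printf("%s: inspect failed: %v\n", id, err)
			continue
		}
		// the container named in the freeze error would be expected to report false
		fmt.Printf("%s: paused=%s\n", id, strings.TrimSpace(string(out)))
	}
}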
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/PauseAgain]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-176700
helpers_test.go:235: (dbg) docker inspect pause-176700:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "828264e4428fde705f121070914b4f5a5b569ed49fa9c69cf693401dddeed4c4",
	        "Created": "2024-04-15T18:53:25.643713615Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 213952,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-04-15T18:53:27.163764918Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:06fc94f477def8d6ec1f9decaa8d9de4b332d5597cd1759a7075056e46e00dfc",
	        "ResolvConfPath": "/var/lib/docker/containers/828264e4428fde705f121070914b4f5a5b569ed49fa9c69cf693401dddeed4c4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/828264e4428fde705f121070914b4f5a5b569ed49fa9c69cf693401dddeed4c4/hostname",
	        "HostsPath": "/var/lib/docker/containers/828264e4428fde705f121070914b4f5a5b569ed49fa9c69cf693401dddeed4c4/hosts",
	        "LogPath": "/var/lib/docker/containers/828264e4428fde705f121070914b4f5a5b569ed49fa9c69cf693401dddeed4c4/828264e4428fde705f121070914b4f5a5b569ed49fa9c69cf693401dddeed4c4-json.log",
	        "Name": "/pause-176700",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-176700:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-176700",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/55abe54dda92b224570861c8bcb094199c2900e2f31c152a75573fd58f7a3a6b-init/diff:/var/lib/docker/overlay2/7d5cfefbd46c2f94744068cb810a43a2057da1935809c9054bd8d457b0f559e7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/55abe54dda92b224570861c8bcb094199c2900e2f31c152a75573fd58f7a3a6b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/55abe54dda92b224570861c8bcb094199c2900e2f31c152a75573fd58f7a3a6b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/55abe54dda92b224570861c8bcb094199c2900e2f31c152a75573fd58f7a3a6b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-176700",
	                "Source": "/var/lib/docker/volumes/pause-176700/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-176700",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-176700",
	                "name.minikube.sigs.k8s.io": "pause-176700",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1259b9cee88996ca143a53bf68110a9ddfdca4220a7053c7671d0284d236a2bf",
	            "SandboxKey": "/var/run/docker/netns/1259b9cee889",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54491"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54492"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54493"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54494"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54495"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-176700": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:67:02",
	                    "NetworkID": "cbf6b957465efa162b7322f43b2b3014fd373507b855130926e721f1b3cc3a84",
	                    "EndpointID": "11912731722f3e1757faece51423e293c850fcb0e4d9ded3902bccd99066efe2",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "pause-176700",
	                        "828264e4428f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
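For reference, the NetworkSettings.Ports block in the inspect dump above is what makes the cluster reachable from the Windows host: 8443/tcp inside the pause-176700 container is published on 127.0.0.1:54495, the same port the apiserver healthz probe later in this log targets. A minimal Go sketch of that lookup, shelling out to the docker CLI; the struct below mirrors only the fields visible in the dump and is not minikube's own kic-driver code:

    // Illustrative only: prints the host port bound to 8443/tcp for a profile container.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"log"
    	"os/exec"
    )

    type inspectEntry struct {
    	NetworkSettings struct {
    		Ports map[string][]struct {
    			HostIp   string `json:"HostIp"`
    			HostPort string `json:"HostPort"`
    		} `json:"Ports"`
    	} `json:"NetworkSettings"`
    }

    func main() {
    	out, err := exec.Command("docker", "inspect", "pause-176700").Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	var entries []inspectEntry
    	if err := json.Unmarshal(out, &entries); err != nil {
    		log.Fatal(err)
    	}
    	for _, e := range entries {
    		for _, b := range e.NetworkSettings.Ports["8443/tcp"] {
    			// With the dump above this prints 127.0.0.1:54495.
    			fmt.Printf("%s:%s\n", b.HostIp, b.HostPort)
    		}
    	}
    }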
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-176700 -n pause-176700
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-176700 -n pause-176700: exit status 2 (1.4599635s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 18:56:10.849216   16072 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
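The stderr warning above, which recurs throughout this report, is cosmetic: the Docker CLI keys its context store on disk by the SHA-256 of the context name, and 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f is the digest of the string "default", so the message only means the "default" context has no meta.json on this machine, not that the daemon is unreachable. A short Go sketch of the path derivation (layout inferred from the message itself, not taken from Docker's source):

    // Illustrative only: rebuilds the context metadata path from the warning above.
    package main

    import (
    	"crypto/sha256"
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func main() {
    	name := "default"
    	sum := sha256.Sum256([]byte(name))
    	home, _ := os.UserHomeDir()
    	meta := filepath.Join(home, ".docker", "contexts", "meta",
    		fmt.Sprintf("%x", sum), "meta.json")
    	// Prints ...\.docker\contexts\meta\37a8eec1...f0688f\meta.json,
    	// matching the file the CLI reported as missing.
    	fmt.Println(meta)
    }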
helpers_test.go:244: <<< TestPause/serial/PauseAgain FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/PauseAgain]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p pause-176700 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p pause-176700 logs -n 25: (16.0796594s)
helpers_test.go:252: TestPause/serial/PauseAgain logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------|---------------------------|-------------------|----------------|---------------------|---------------------|
	| Command |               Args               |          Profile          |       User        |    Version     |     Start Time      |      End Time       |
	|---------|----------------------------------|---------------------------|-------------------|----------------|---------------------|---------------------|
	| stop    | -p NoKubernetes-344600           | NoKubernetes-344600       | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:52 UTC | 15 Apr 24 18:52 UTC |
	| start   | -p NoKubernetes-344600           | NoKubernetes-344600       | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:52 UTC | 15 Apr 24 18:52 UTC |
	|         | --driver=docker                  |                           |                   |                |                     |                     |
	| ssh     | -p NoKubernetes-344600 sudo      | NoKubernetes-344600       | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:52 UTC |                     |
	|         | systemctl is-active --quiet      |                           |                   |                |                     |                     |
	|         | service kubelet                  |                           |                   |                |                     |                     |
	| delete  | -p NoKubernetes-344600           | NoKubernetes-344600       | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:52 UTC | 15 Apr 24 18:52 UTC |
	| start   | -p pause-176700 --memory=2048    | pause-176700              | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:52 UTC | 15 Apr 24 18:55 UTC |
	|         | --install-addons=false           |                           |                   |                |                     |                     |
	|         | --wait=all --driver=docker       |                           |                   |                |                     |                     |
	| delete  | -p stopped-upgrade-383200        | stopped-upgrade-383200    | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:53 UTC | 15 Apr 24 18:53 UTC |
	| start   | -p docker-flags-646100           | docker-flags-646100       | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:53 UTC | 15 Apr 24 18:54 UTC |
	|         | --cache-images=false             |                           |                   |                |                     |                     |
	|         | --memory=2048                    |                           |                   |                |                     |                     |
	|         | --install-addons=false           |                           |                   |                |                     |                     |
	|         | --wait=false                     |                           |                   |                |                     |                     |
	|         | --docker-env=FOO=BAR             |                           |                   |                |                     |                     |
	|         | --docker-env=BAZ=BAT             |                           |                   |                |                     |                     |
	|         | --docker-opt=debug               |                           |                   |                |                     |                     |
	|         | --docker-opt=icc=true            |                           |                   |                |                     |                     |
	|         | --alsologtostderr -v=5           |                           |                   |                |                     |                     |
	|         | --driver=docker                  |                           |                   |                |                     |                     |
	| delete  | -p missing-upgrade-383200        | missing-upgrade-383200    | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:53 UTC | 15 Apr 24 18:54 UTC |
	| start   | -p force-systemd-env-712800      | force-systemd-env-712800  | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:54 UTC | 15 Apr 24 18:55 UTC |
	|         | --memory=2048                    |                           |                   |                |                     |                     |
	|         | --alsologtostderr -v=5           |                           |                   |                |                     |                     |
	|         | --driver=docker                  |                           |                   |                |                     |                     |
	| delete  | -p running-upgrade-465600        | running-upgrade-465600    | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:54 UTC | 15 Apr 24 18:54 UTC |
	| start   | -p force-systemd-flag-930300     | force-systemd-flag-930300 | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:54 UTC | 15 Apr 24 18:55 UTC |
	|         | --memory=2048 --force-systemd    |                           |                   |                |                     |                     |
	|         | --alsologtostderr -v=5           |                           |                   |                |                     |                     |
	|         | --driver=docker                  |                           |                   |                |                     |                     |
	| ssh     | docker-flags-646100 ssh          | docker-flags-646100       | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:54 UTC | 15 Apr 24 18:54 UTC |
	|         | sudo systemctl show docker       |                           |                   |                |                     |                     |
	|         | --property=Environment           |                           |                   |                |                     |                     |
	|         | --no-pager                       |                           |                   |                |                     |                     |
	| ssh     | docker-flags-646100 ssh          | docker-flags-646100       | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:54 UTC | 15 Apr 24 18:54 UTC |
	|         | sudo systemctl show docker       |                           |                   |                |                     |                     |
	|         | --property=ExecStart             |                           |                   |                |                     |                     |
	|         | --no-pager                       |                           |                   |                |                     |                     |
	| delete  | -p docker-flags-646100           | docker-flags-646100       | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:54 UTC | 15 Apr 24 18:54 UTC |
	| start   | -p kubernetes-upgrade-023700     | kubernetes-upgrade-023700 | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:54 UTC |                     |
	|         | --memory=2200                    |                           |                   |                |                     |                     |
	|         | --kubernetes-version=v1.20.0     |                           |                   |                |                     |                     |
	|         | --alsologtostderr -v=1           |                           |                   |                |                     |                     |
	|         | --driver=docker                  |                           |                   |                |                     |                     |
	| start   | -p pause-176700                  | pause-176700              | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:55 UTC | 15 Apr 24 18:55 UTC |
	|         | --alsologtostderr -v=1           |                           |                   |                |                     |                     |
	|         | --driver=docker                  |                           |                   |                |                     |                     |
	| ssh     | force-systemd-env-712800         | force-systemd-env-712800  | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:55 UTC | 15 Apr 24 18:55 UTC |
	|         | ssh docker info --format         |                           |                   |                |                     |                     |
	|         | {{.CgroupDriver}}                |                           |                   |                |                     |                     |
	| delete  | -p force-systemd-env-712800      | force-systemd-env-712800  | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:55 UTC | 15 Apr 24 18:55 UTC |
	| start   | -p cert-expiration-262100        | cert-expiration-262100    | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:55 UTC |                     |
	|         | --memory=2048                    |                           |                   |                |                     |                     |
	|         | --cert-expiration=3m             |                           |                   |                |                     |                     |
	|         | --driver=docker                  |                           |                   |                |                     |                     |
	| ssh     | force-systemd-flag-930300        | force-systemd-flag-930300 | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:55 UTC | 15 Apr 24 18:55 UTC |
	|         | ssh docker info --format         |                           |                   |                |                     |                     |
	|         | {{.CgroupDriver}}                |                           |                   |                |                     |                     |
	| delete  | -p force-systemd-flag-930300     | force-systemd-flag-930300 | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:55 UTC | 15 Apr 24 18:55 UTC |
	| start   | -p cert-options-410800           | cert-options-410800       | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:55 UTC |                     |
	|         | --memory=2048                    |                           |                   |                |                     |                     |
	|         | --apiserver-ips=127.0.0.1        |                           |                   |                |                     |                     |
	|         | --apiserver-ips=192.168.15.15    |                           |                   |                |                     |                     |
	|         | --apiserver-names=localhost      |                           |                   |                |                     |                     |
	|         | --apiserver-names=www.google.com |                           |                   |                |                     |                     |
	|         | --apiserver-port=8555            |                           |                   |                |                     |                     |
	|         | --driver=docker                  |                           |                   |                |                     |                     |
	|         | --apiserver-name=localhost       |                           |                   |                |                     |                     |
	| pause   | -p pause-176700                  | pause-176700              | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:55 UTC | 15 Apr 24 18:55 UTC |
	|         | --alsologtostderr -v=5           |                           |                   |                |                     |                     |
	| unpause | -p pause-176700                  | pause-176700              | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:55 UTC | 15 Apr 24 18:56 UTC |
	|         | --alsologtostderr -v=5           |                           |                   |                |                     |                     |
	| pause   | -p pause-176700                  | pause-176700              | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:56 UTC |                     |
	|         | --alsologtostderr -v=5           |                           |                   |                |                     |                     |
	|---------|----------------------------------|---------------------------|-------------------|----------------|---------------------|---------------------|
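The last three audit rows are the sequence TestPause/serial/PauseAgain exercises: pause, unpause, then pause again, and the final pause has no End Time because it is the step that failed. A simplified stand-in for how the test drives the built binary is sketched below; the binary path, profile name, and flags come from the table, while the helper structure is hypothetical and is not the real pause_test.go:

    // Simplified outline of the pause/unpause/pause flow recorded above; not the actual integration test.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func run(args ...string) error {
    	out, err := exec.Command("out/minikube-windows-amd64.exe", args...).CombinedOutput()
    	fmt.Printf("$ minikube %v\n%s\n", args, out)
    	return err
    }

    func main() {
    	profile := "pause-176700"
    	steps := [][]string{
    		{"pause", "-p", profile, "--alsologtostderr", "-v=5"},
    		{"unpause", "-p", profile, "--alsologtostderr", "-v=5"},
    		{"pause", "-p", profile, "--alsologtostderr", "-v=5"}, // PauseAgain: this step failed in this run
    	}
    	for _, s := range steps {
    		if err := run(s...); err != nil {
    			fmt.Println("step failed:", err)
    			return
    		}
    	}
    }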
	
	
	==> Last Start <==
	Log file created at: 2024/04/15 18:55:47
	Running on machine: minikube4
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0415 18:55:47.331410   10032 out.go:291] Setting OutFile to fd 1924 ...
	I0415 18:55:47.331410   10032 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 18:55:47.331410   10032 out.go:304] Setting ErrFile to fd 1768...
	I0415 18:55:47.331410   10032 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 18:55:47.355402   10032 out.go:298] Setting JSON to false
	I0415 18:55:47.359410   10032 start.go:129] hostinfo: {"hostname":"minikube4","uptime":23817,"bootTime":1713183530,"procs":211,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W0415 18:55:47.359410   10032 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 18:55:47.365409   10032 out.go:177] * [cert-options-410800] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0415 18:55:47.369424   10032 notify.go:220] Checking for updates...
	I0415 18:55:47.371408   10032 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0415 18:55:47.373414   10032 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 18:55:47.375405   10032 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I0415 18:55:47.377409   10032 out.go:177]   - MINIKUBE_LOCATION=18634
	I0415 18:55:47.379412   10032 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 18:55:44.257553    4824 pod_ready.go:102] pod "kube-controller-manager-pause-176700" in "kube-system" namespace has status "Ready":"False"
	I0415 18:55:46.759941    4824 pod_ready.go:102] pod "kube-controller-manager-pause-176700" in "kube-system" namespace has status "Ready":"False"
	I0415 18:55:43.233964    8252 cli_runner.go:217] Completed: docker run --rm --name cert-expiration-262100-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-262100 --entrypoint /usr/bin/test -v cert-expiration-262100:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b -d /var/lib: (2.1137455s)
	I0415 18:55:43.233964    8252 oci.go:107] Successfully prepared a docker volume cert-expiration-262100
	I0415 18:55:43.233964    8252 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 18:55:43.233964    8252 kic.go:194] Starting extracting preloaded images to volume ...
	I0415 18:55:43.245948    8252 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cert-expiration-262100:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b -I lz4 -xf /preloaded.tar -C /extractDir
	I0415 18:55:47.382414   10032 config.go:182] Loaded profile config "cert-expiration-262100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 18:55:47.383419   10032 config.go:182] Loaded profile config "kubernetes-upgrade-023700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0415 18:55:47.383419   10032 config.go:182] Loaded profile config "pause-176700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 18:55:47.384417   10032 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 18:55:47.703547   10032 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0415 18:55:47.714555   10032 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 18:55:48.097699   10032 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:104 SystemTime:2024-04-15 18:55:48.055879854 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 In
dexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersio
n:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:
Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 18:55:48.102742   10032 out.go:177] * Using the docker driver based on user configuration
	I0415 18:55:48.108691   10032 start.go:297] selected driver: docker
	I0415 18:55:48.108691   10032 start.go:901] validating driver "docker" against <nil>
	I0415 18:55:48.108691   10032 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 18:55:48.193692   10032 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 18:55:48.592794   10032 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:104 SystemTime:2024-04-15 18:55:48.549432237 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 In
dexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersio
n:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:
Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 18:55:48.592794   10032 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 18:55:48.594715   10032 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0415 18:55:48.598710   10032 out.go:177] * Using Docker Desktop driver with root privileges
	I0415 18:55:48.601711   10032 cni.go:84] Creating CNI manager for ""
	I0415 18:55:48.601711   10032 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 18:55:48.601711   10032 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0415 18:55:48.601711   10032 start.go:340] cluster config:
	{Name:cert-options-410800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:cert-options-410800 Namespace:default APIServerHAVIP: APIServerName:localhost APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0
.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 18:55:48.605723   10032 out.go:177] * Starting "cert-options-410800" primary control-plane node in "cert-options-410800" cluster
	I0415 18:55:48.611768   10032 cache.go:121] Beginning downloading kic base image for docker with docker
	I0415 18:55:48.615727   10032 out.go:177] * Pulling base image v0.0.43-1713176859-18634 ...
	I0415 18:55:48.251933    4824 pod_ready.go:92] pod "kube-controller-manager-pause-176700" in "kube-system" namespace has status "Ready":"True"
	I0415 18:55:48.251933    4824 pod_ready.go:81] duration metric: took 10.5172735s for pod "kube-controller-manager-pause-176700" in "kube-system" namespace to be "Ready" ...
	I0415 18:55:48.251933    4824 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7lm47" in "kube-system" namespace to be "Ready" ...
	I0415 18:55:48.265700    4824 pod_ready.go:92] pod "kube-proxy-7lm47" in "kube-system" namespace has status "Ready":"True"
	I0415 18:55:48.266716    4824 pod_ready.go:81] duration metric: took 14.7822ms for pod "kube-proxy-7lm47" in "kube-system" namespace to be "Ready" ...
	I0415 18:55:48.266716    4824 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-176700" in "kube-system" namespace to be "Ready" ...
	I0415 18:55:48.309697    4824 pod_ready.go:92] pod "kube-scheduler-pause-176700" in "kube-system" namespace has status "Ready":"True"
	I0415 18:55:48.309697    4824 pod_ready.go:81] duration metric: took 42.9793ms for pod "kube-scheduler-pause-176700" in "kube-system" namespace to be "Ready" ...
	I0415 18:55:48.309697    4824 pod_ready.go:38] duration metric: took 11.1129647s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0415 18:55:48.309697    4824 api_server.go:52] waiting for apiserver process to appear ...
	I0415 18:55:48.332719    4824 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0415 18:55:48.377715    4824 api_server.go:72] duration metric: took 14.6701142s to wait for apiserver process to appear ...
	I0415 18:55:48.377715    4824 api_server.go:88] waiting for apiserver healthz status ...
	I0415 18:55:48.377715    4824 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:54495/healthz ...
	I0415 18:55:48.403709    4824 api_server.go:279] https://127.0.0.1:54495/healthz returned 200:
	ok
	I0415 18:55:48.415721    4824 api_server.go:141] control plane version: v1.29.3
	I0415 18:55:48.415721    4824 api_server.go:131] duration metric: took 38.0038ms to wait for apiserver health ...
	I0415 18:55:48.415721    4824 system_pods.go:43] waiting for kube-system pods to appear ...
	I0415 18:55:48.456696    4824 system_pods.go:59] 6 kube-system pods found
	I0415 18:55:48.456696    4824 system_pods.go:61] "coredns-76f75df574-t2b6w" [5488f80b-0761-4c97-a2c7-08aeca6362d0] Running
	I0415 18:55:48.456696    4824 system_pods.go:61] "etcd-pause-176700" [a924906f-acd2-4e3a-a031-0755dd7bd5e8] Running
	I0415 18:55:48.456696    4824 system_pods.go:61] "kube-apiserver-pause-176700" [7db98514-aaf6-4ffa-b4b7-119a3bee0522] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0415 18:55:48.456696    4824 system_pods.go:61] "kube-controller-manager-pause-176700" [0116200c-6f3c-4cd5-a04b-0afe6bbacff4] Running
	I0415 18:55:48.456696    4824 system_pods.go:61] "kube-proxy-7lm47" [c890678f-54c9-40e0-99c6-1ec4aa396a04] Running
	I0415 18:55:48.456696    4824 system_pods.go:61] "kube-scheduler-pause-176700" [25445fef-b0a6-4032-be00-e6f3636bde9f] Running
	I0415 18:55:48.456696    4824 system_pods.go:74] duration metric: took 40.9735ms to wait for pod list to return data ...
	I0415 18:55:48.456696    4824 default_sa.go:34] waiting for default service account to be created ...
	I0415 18:55:48.464714    4824 default_sa.go:45] found service account: "default"
	I0415 18:55:48.464714    4824 default_sa.go:55] duration metric: took 8.0179ms for default service account to be created ...
	I0415 18:55:48.464714    4824 system_pods.go:116] waiting for k8s-apps to be running ...
	I0415 18:55:48.477716    4824 system_pods.go:86] 6 kube-system pods found
	I0415 18:55:48.477716    4824 system_pods.go:89] "coredns-76f75df574-t2b6w" [5488f80b-0761-4c97-a2c7-08aeca6362d0] Running
	I0415 18:55:48.477716    4824 system_pods.go:89] "etcd-pause-176700" [a924906f-acd2-4e3a-a031-0755dd7bd5e8] Running
	I0415 18:55:48.477716    4824 system_pods.go:89] "kube-apiserver-pause-176700" [7db98514-aaf6-4ffa-b4b7-119a3bee0522] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0415 18:55:48.477716    4824 system_pods.go:89] "kube-controller-manager-pause-176700" [0116200c-6f3c-4cd5-a04b-0afe6bbacff4] Running
	I0415 18:55:48.477716    4824 system_pods.go:89] "kube-proxy-7lm47" [c890678f-54c9-40e0-99c6-1ec4aa396a04] Running
	I0415 18:55:48.477716    4824 system_pods.go:89] "kube-scheduler-pause-176700" [25445fef-b0a6-4032-be00-e6f3636bde9f] Running
	I0415 18:55:48.477716    4824 system_pods.go:126] duration metric: took 13.0011ms to wait for k8s-apps to be running ...
	I0415 18:55:48.477716    4824 system_svc.go:44] waiting for kubelet service to be running ....
	I0415 18:55:48.491709    4824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 18:55:48.519734    4824 system_svc.go:56] duration metric: took 42.0161ms WaitForService to wait for kubelet
	I0415 18:55:48.519734    4824 kubeadm.go:576] duration metric: took 14.8121266s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 18:55:48.519734    4824 node_conditions.go:102] verifying NodePressure condition ...
	I0415 18:55:48.527715    4824 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I0415 18:55:48.527715    4824 node_conditions.go:123] node cpu capacity is 16
	I0415 18:55:48.527715    4824 node_conditions.go:105] duration metric: took 7.9806ms to run NodePressure ...
	I0415 18:55:48.527715    4824 start.go:240] waiting for startup goroutines ...
	I0415 18:55:48.527715    4824 start.go:245] waiting for cluster config update ...
	I0415 18:55:48.527715    4824 start.go:254] writing updated cluster config ...
	I0415 18:55:48.545718    4824 ssh_runner.go:195] Run: rm -f paused
	I0415 18:55:48.723284    4824 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0415 18:55:48.729254    4824 out.go:177] * Done! kubectl is now configured to use "pause-176700" cluster and "default" namespace by default
	I0415 18:55:48.620733   10032 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 18:55:48.620733   10032 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b in local docker daemon
	I0415 18:55:48.620733   10032 preload.go:147] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0415 18:55:48.620733   10032 cache.go:56] Caching tarball of preloaded images
	I0415 18:55:48.621716   10032 preload.go:173] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0415 18:55:48.621716   10032 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 18:55:48.621716   10032 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cert-options-410800\config.json ...
	I0415 18:55:48.621716   10032 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cert-options-410800\config.json: {Name:mk419a5b829dfb7b072e0a250cdd188dd69de34f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:55:48.842871   10032 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b in local docker daemon, skipping pull
	I0415 18:55:48.842871   10032 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b exists in daemon, skipping load
	I0415 18:55:48.842871   10032 cache.go:194] Successfully downloaded all kic artifacts
	I0415 18:55:48.842871   10032 start.go:360] acquireMachinesLock for cert-options-410800: {Name:mk6c8ea33aff7d577cac9175821791e55f059c1c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 18:55:48.842871   10032 start.go:364] duration metric: took 0s to acquireMachinesLock for "cert-options-410800"
	I0415 18:55:48.842871   10032 start.go:93] Provisioning new machine with config: &{Name:cert-options-410800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:cert-options-410800 Namespace:default APIServerHAVIP: APISe
rverName:localhost APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8555 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 18:55:48.842871   10032 start.go:125] createHost starting for "" (driver="docker")
	I0415 18:55:48.848865   10032 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0415 18:55:48.848865   10032 start.go:159] libmachine.API.Create for "cert-options-410800" (driver="docker")
	I0415 18:55:48.848865   10032 client.go:168] LocalClient.Create starting
	I0415 18:55:48.848865   10032 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I0415 18:55:48.848865   10032 main.go:141] libmachine: Decoding PEM data...
	I0415 18:55:48.848865   10032 main.go:141] libmachine: Parsing certificate...
	I0415 18:55:48.848865   10032 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I0415 18:55:48.848865   10032 main.go:141] libmachine: Decoding PEM data...
	I0415 18:55:48.848865   10032 main.go:141] libmachine: Parsing certificate...
	I0415 18:55:48.864855   10032 cli_runner.go:164] Run: docker network inspect cert-options-410800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0415 18:55:49.050863   10032 cli_runner.go:211] docker network inspect cert-options-410800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0415 18:55:49.062865   10032 network_create.go:281] running [docker network inspect cert-options-410800] to gather additional debugging logs...
	I0415 18:55:49.062865   10032 cli_runner.go:164] Run: docker network inspect cert-options-410800
	W0415 18:55:49.273889   10032 cli_runner.go:211] docker network inspect cert-options-410800 returned with exit code 1
	I0415 18:55:49.273889   10032 network_create.go:284] error running [docker network inspect cert-options-410800]: docker network inspect cert-options-410800: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network cert-options-410800 not found
	I0415 18:55:49.273889   10032 network_create.go:286] output of [docker network inspect cert-options-410800]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network cert-options-410800 not found
	
	** /stderr **
	I0415 18:55:49.284895   10032 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 18:55:49.524894   10032 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 18:55:49.556890   10032 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 18:55:49.588901   10032 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 18:55:49.619902   10032 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 18:55:49.652898   10032 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0023b1b30}
	I0415 18:55:49.652898   10032 network_create.go:124] attempt to create docker network cert-options-410800 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0415 18:55:49.675908   10032 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cert-options-410800 cert-options-410800
	I0415 18:55:49.993803   10032 network_create.go:108] docker network cert-options-410800 192.168.85.0/24 created
	I0415 18:55:49.993803   10032 kic.go:121] calculated static IP "192.168.85.2" for the "cert-options-410800" container
	I0415 18:55:50.023798   10032 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0415 18:55:50.235427   10032 cli_runner.go:164] Run: docker volume create cert-options-410800 --label name.minikube.sigs.k8s.io=cert-options-410800 --label created_by.minikube.sigs.k8s.io=true
	I0415 18:55:50.528439   10032 oci.go:103] Successfully created a docker volume cert-options-410800
	I0415 18:55:50.540416   10032 cli_runner.go:164] Run: docker run --rm --name cert-options-410800-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-options-410800 --entrypoint /usr/bin/test -v cert-options-410800:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b -d /var/lib
	I0415 18:55:57.710420    8708 docker.go:649] duration metric: took 14.3238015s to copy over tarball
	I0415 18:55:57.723195    8708 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0415 18:56:03.799672    8708 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (6.0761453s)
	I0415 18:56:03.799705    8708 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0415 18:56:03.900664    8708 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0415 18:56:03.925329    8708 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0415 18:56:03.971219    8708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:56:04.149961    8708 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0415 18:56:10.385200   10032 cli_runner.go:217] Completed: docker run --rm --name cert-options-410800-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-options-410800 --entrypoint /usr/bin/test -v cert-options-410800:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b -d /var/lib: (19.8438637s)
	I0415 18:56:10.385200   10032 oci.go:107] Successfully prepared a docker volume cert-options-410800
	I0415 18:56:10.385200   10032 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 18:56:10.385200   10032 kic.go:194] Starting extracting preloaded images to volume ...
	I0415 18:56:10.397212   10032 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cert-options-410800:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b -I lz4 -xf /preloaded.tar -C /extractDir
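Before the second pause was attempted, the Last Start log above shows the cluster had come back healthy: the control-plane pods reported Ready and the apiserver healthz probe against the forwarded port returned 200 at 18:55:48. A self-contained sketch of such a probe from the host follows; it skips TLS verification purely to keep the example short, and minikube's own check may authenticate differently:

    // Illustrative only: probes the forwarded apiserver port (54495, from the inspect/log output above).
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get("https://127.0.0.1:54495/healthz")
    	if err != nil {
    		fmt.Println("healthz probe failed:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	// The log above shows this endpoint returning 200 with body "ok".
    	fmt.Println(resp.StatusCode, string(body))
    }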
	
	
	==> Docker <==
	Apr 15 18:55:25 pause-176700 systemd[1]: cri-docker.service: Deactivated successfully.
	Apr 15 18:55:26 pause-176700 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	Apr 15 18:55:26 pause-176700 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	Apr 15 18:55:26 pause-176700 cri-dockerd[4923]: time="2024-04-15T18:55:26Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Apr 15 18:55:26 pause-176700 cri-dockerd[4923]: time="2024-04-15T18:55:26Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Apr 15 18:55:26 pause-176700 cri-dockerd[4923]: time="2024-04-15T18:55:26Z" level=info msg="Start docker client with request timeout 0s"
	Apr 15 18:55:26 pause-176700 cri-dockerd[4923]: time="2024-04-15T18:55:26Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Apr 15 18:55:26 pause-176700 cri-dockerd[4923]: time="2024-04-15T18:55:26Z" level=info msg="Loaded network plugin cni"
	Apr 15 18:55:26 pause-176700 cri-dockerd[4923]: time="2024-04-15T18:55:26Z" level=info msg="Docker cri networking managed by network plugin cni"
	Apr 15 18:55:26 pause-176700 cri-dockerd[4923]: time="2024-04-15T18:55:26Z" level=info msg="Setting cgroupDriver cgroupfs"
	Apr 15 18:55:26 pause-176700 cri-dockerd[4923]: time="2024-04-15T18:55:26Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Apr 15 18:55:26 pause-176700 cri-dockerd[4923]: time="2024-04-15T18:55:26Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Apr 15 18:55:26 pause-176700 cri-dockerd[4923]: time="2024-04-15T18:55:26Z" level=info msg="Start cri-dockerd grpc backend"
	Apr 15 18:55:26 pause-176700 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Apr 15 18:55:27 pause-176700 cri-dockerd[4923]: time="2024-04-15T18:55:27Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-76f75df574-t2b6w_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"0906724d189189bfbe7b166fdb5c256eb5ad7224fef384e5e028186cfe43b34b\""
	Apr 15 18:55:27 pause-176700 cri-dockerd[4923]: time="2024-04-15T18:55:27Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-76f75df574-t2b6w_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"bb6c804e1fc9ce5a8a3d7afedcc8f06a7f3a015195a46037907fac82fabc4fde\""
	Apr 15 18:55:27 pause-176700 cri-dockerd[4923]: time="2024-04-15T18:55:27Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6113d883501c44e9eb6da1ad484b66c33cd6cd013ec53af107e2a7586263fe8d/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Apr 15 18:55:28 pause-176700 cri-dockerd[4923]: time="2024-04-15T18:55:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/30608984b6bea274857748a419098729c8c94b5fb6765f543e2e715ac31fb44b/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Apr 15 18:55:28 pause-176700 cri-dockerd[4923]: time="2024-04-15T18:55:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8578361676bc7c616e34bf33dfef3ab036d98bc4159b482981defc62ccaf499d/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Apr 15 18:55:28 pause-176700 cri-dockerd[4923]: time="2024-04-15T18:55:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4f7d65ecb802d1f387e0ff7779dbb244f6a1d3b47e2caa9a871449a64a009332/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Apr 15 18:55:29 pause-176700 cri-dockerd[4923]: time="2024-04-15T18:55:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1ca71c7d5b12e596e475f9926393325c1e74557ed5e2cffa834fb8e76c006f0e/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Apr 15 18:55:31 pause-176700 cri-dockerd[4923]: time="2024-04-15T18:55:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/91ef24769e92ca6daed5880cda244546f92a2b11ec1ae4a4391c35e9dee1fa45/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Apr 15 18:56:08 pause-176700 dockerd[4569]: time="2024-04-15T18:56:08.339615761Z" level=error msg="Handler for POST /v1.45/containers/0b01eadb1fef/pause returned error: cannot pause container 0b01eadb1fef7d505f0162e50ddffd82c7afe1087422896badbf4e5a98544454: OCI runtime pause failed: unable to freeze: unknown" spanID=ebca0eb0c802957b traceID=5cc901435240bea76d9ea149130087fa
	Apr 15 18:56:09 pause-176700 cri-dockerd[4923]: W0415 18:56:09.411613    4923 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Apr 15 18:56:09 pause-176700 cri-dockerd[4923]: W0415 18:56:09.414657    4923 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
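The dockerd entry at 18:56:08 looks like the proximate cause of the PauseAgain failure: the pause call for container 0b01eadb1fef (the etcd container in the status table below) came back with "OCI runtime pause failed: unable to freeze", i.e. the OCI runtime could not freeze that container's cgroup on this WSL2 backend. The same API call can be issued directly with the Docker Engine Go SDK; a minimal sketch, using the public client rather than minikube's pause package:

    // Illustrative only: issues the same pause call that failed in the dockerd log above.
    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	"github.com/docker/docker/client"
    )

    func main() {
    	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer cli.Close()

    	// Container ID taken from the log/status output in this report (etcd-pause-176700).
    	id := "0b01eadb1fef"
    	if err := cli.ContainerPause(context.Background(), id); err != nil {
    		// On this run dockerd reported: "OCI runtime pause failed: unable to freeze: unknown".
    		fmt.Println("pause failed:", err)
    		return
    	}
    	fmt.Println("paused", id)
    }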
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	13fbd1683bed3       39f995c9f1996       42 seconds ago       Running             kube-apiserver            1                   91ef24769e92c       kube-apiserver-pause-176700
	eb699f5aa465b       cbb01a7bd410d       43 seconds ago       Running             coredns                   1                   1ca71c7d5b12e       coredns-76f75df574-t2b6w
	d1480622dacdd       a1d263b5dc5b0       45 seconds ago       Running             kube-proxy                1                   4f7d65ecb802d       kube-proxy-7lm47
	4c054de8e645b       6052a25da3f97       45 seconds ago       Running             kube-controller-manager   1                   8578361676bc7       kube-controller-manager-pause-176700
	0b01eadb1fef7       3861cfcd7c04c       45 seconds ago       Running             etcd                      1                   30608984b6bea       etcd-pause-176700
	a5252c26ecd24       8c390d98f50c0       45 seconds ago       Running             kube-scheduler            1                   6113d883501c4       kube-scheduler-pause-176700
	2c699e8585f76       cbb01a7bd410d       About a minute ago   Exited              coredns                   0                   bb6c804e1fc9c       coredns-76f75df574-t2b6w
	14ab20c2ac93f       a1d263b5dc5b0       About a minute ago   Exited              kube-proxy                0                   b11e00d242b6f       kube-proxy-7lm47
	c04926c5f5e24       3861cfcd7c04c       2 minutes ago        Exited              etcd                      0                   785dcd90e2759       etcd-pause-176700
	626d700a09bd0       6052a25da3f97       2 minutes ago        Exited              kube-controller-manager   0                   5853c44224113       kube-controller-manager-pause-176700
	92d4582c6a2bf       8c390d98f50c0       2 minutes ago        Exited              kube-scheduler            0                   e4b281a1bf20b       kube-scheduler-pause-176700
	c8642e8133ee5       39f995c9f1996       2 minutes ago        Exited              kube-apiserver            0                   cda52aa7032ec       kube-apiserver-pause-176700
	
	
	==> coredns [2c699e8585f7] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: Trace[1070839197]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (15-Apr-2024 18:54:33.307) (total time: 21049ms):
	Trace[1070839197]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused 21047ms (18:54:54.351)
	Trace[1070839197]: [21.049516572s] [21.049516572s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: Trace[1633633905]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (15-Apr-2024 18:54:33.307) (total time: 21049ms):
	Trace[1633633905]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused 21049ms (18:54:54.353)
	Trace[1633633905]: [21.049860521s] [21.049860521s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: Trace[1122612462]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (15-Apr-2024 18:54:33.307) (total time: 21050ms):
	Trace[1122612462]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused 21049ms (18:54:54.353)
	Trace[1122612462]: [21.050913761s] [21.050913761s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [eb699f5aa465] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = f869070685748660180df1b7a47d58cdafcf2f368266578c062d1151dc2c900964aecc5975e8882e6de6fdfb6460463e30ebfaad2ec8f0c3c6436f80225b3b5b
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:48703 - 53199 "HINFO IN 5151283621995439190.7960359597370647878. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.036245502s
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
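The describe-nodes failure above is a TLS handshake timeout while the restarted apiserver is still coming up. As a sketch only (the address is taken from the node IP and port seen elsewhere in these logs, and the /readyz endpoint plus relaxed certificate handling are assumptions, not part of the test harness), a small Go poll that would distinguish "still starting" from "down":

// readiness_probe.go: minimal sketch of a readiness poll against the
// apiserver when kubectl fails with a TLS handshake timeout, as above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver serves a cluster-CA certificate; skipping
			// verification keeps this standalone probe self-contained.
			TLSClientConfig:     &tls.Config{InsecureSkipVerify: true},
			TLSHandshakeTimeout: 3 * time.Second,
		},
	}

	// Assumed address: node IP and port 8443 as reported in these logs.
	const url = "https://192.168.103.2:8443/readyz"
	for i := 0; i < 12; i++ {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("not ready yet:", err)
			time.Sleep(5 * time.Second)
			continue
		}
		resp.Body.Close()
		fmt.Println("apiserver responded with", resp.Status)
		return
	}
	fmt.Println("gave up waiting for the apiserver")
}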
	
	
	==> dmesg <==
	[Apr15 18:56] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [0b01eadb1fef] <==
	{"level":"warn","ts":"2024-04-15T18:56:06.300979Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":13873777233191370758,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-04-15T18:56:06.801314Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":13873777233191370758,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-04-15T18:56:07.302483Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":13873777233191370758,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-04-15T18:56:07.684238Z","caller":"wal/wal.go:805","msg":"slow fdatasync","took":"6.394835424s","expected-duration":"1s"}
	{"level":"info","ts":"2024-04-15T18:56:07.68637Z","caller":"traceutil/trace.go:171","msg":"trace[2139773263] transaction","detail":"{read_only:false; response_revision:522; number_of_response:1; }","duration":"6.397003611s","start":"2024-04-15T18:56:01.289303Z","end":"2024-04-15T18:56:07.686306Z","steps":["trace[2139773263] 'process raft request'  (duration: 6.396717373s)"],"step_count":1}
	{"level":"warn","ts":"2024-04-15T18:56:07.686934Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-15T18:56:01.289279Z","time spent":"6.39722624s","remote":"127.0.0.1:45630","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":670,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/default/pause-176700.17c689049f09487c\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-176700.17c689049f09487c\" value_size:598 lease:4650405196336594546 >> failure:<>"}
	{"level":"warn","ts":"2024-04-15T18:56:07.893633Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"5.593679178s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kube-proxy\" ","response":"","error":"context canceled"}
	{"level":"warn","ts":"2024-04-15T18:56:07.893832Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-15T18:56:01.290022Z","time spent":"6.603805196s","remote":"127.0.0.1:45730","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":0,"request content":""}
	2024/04/15 18:56:07 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-04-15T18:56:07.893877Z","caller":"traceutil/trace.go:171","msg":"trace[639495352] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kube-proxy; range_end:; }","duration":"5.593910609s","start":"2024-04-15T18:56:02.299899Z","end":"2024-04-15T18:56:07.89381Z","steps":["trace[639495352] 'agreement among raft nodes before linearized reading'  (duration: 5.593687779s)"],"step_count":1}
	{"level":"warn","ts":"2024-04-15T18:56:07.893945Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-15T18:56:02.299891Z","time spent":"5.594038325s","remote":"127.0.0.1:45768","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":0,"response size":0,"request content":"key:\"/registry/serviceaccounts/kube-system/kube-proxy\" "}
	2024/04/15 18:56:07 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-04-15T18:56:07.894023Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"5.594196046s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/coredns\" ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-04-15T18:56:07.894149Z","caller":"traceutil/trace.go:171","msg":"trace[863520601] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/coredns; range_end:; }","duration":"5.594345465s","start":"2024-04-15T18:56:02.299795Z","end":"2024-04-15T18:56:07.89414Z","steps":["trace[863520601] 'agreement among raft nodes before linearized reading'  (duration: 5.594221249s)"],"step_count":1}
	{"level":"warn","ts":"2024-04-15T18:56:07.894178Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-15T18:56:02.29978Z","time spent":"5.594389671s","remote":"127.0.0.1:45768","response type":"/etcdserverpb.KV/Range","request count":0,"request size":47,"response count":0,"response size":0,"request content":"key:\"/registry/serviceaccounts/kube-system/coredns\" "}
	2024/04/15 18:56:07 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/04/15 18:56:07 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-04-15T18:56:07.894281Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-15T18:56:01.290209Z","time spent":"6.60406773s","remote":"127.0.0.1:45838","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":0,"request content":""}
	2024/04/15 18:56:07 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-04-15T18:56:07.894386Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"6.604234953s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/pause-176700\" ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-04-15T18:56:07.894415Z","caller":"traceutil/trace.go:171","msg":"trace[2117318061] range","detail":"{range_begin:/registry/csinodes/pause-176700; range_end:; }","duration":"6.604496787s","start":"2024-04-15T18:56:01.289911Z","end":"2024-04-15T18:56:07.894408Z","steps":["trace[2117318061] 'agreement among raft nodes before linearized reading'  (duration: 6.604458582s)"],"step_count":1}
	{"level":"warn","ts":"2024-04-15T18:56:07.894436Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-15T18:56:01.289899Z","time spent":"6.604531592s","remote":"127.0.0.1:45934","response type":"/etcdserverpb.KV/Range","request count":0,"request size":33,"response count":0,"response size":0,"request content":"key:\"/registry/csinodes/pause-176700\" "}
	2024/04/15 18:56:07 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-04-15T18:56:09.424934Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"946.423979ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873777233191370762 > lease_revoke:<id:40898ee31d45ffcd>","response":"size:28"}
	{"level":"info","ts":"2024-04-15T18:56:09.428368Z","caller":"traceutil/trace.go:171","msg":"trace[260525202] linearizableReadLoop","detail":"{readStateIndex:558; appliedIndex:554; }","duration":"8.138185554s","start":"2024-04-15T18:56:01.290146Z","end":"2024-04-15T18:56:09.428332Z","steps":["trace[260525202] 'read index received'  (duration: 6.39557552s)","trace[260525202] 'applied index is now lower than readState.Index'  (duration: 1.742609334s)"],"step_count":2}
	
	
	==> etcd [c04926c5f5e2] <==
	{"level":"info","ts":"2024-04-15T18:55:01.01932Z","caller":"traceutil/trace.go:171","msg":"trace[1184546407] transaction","detail":"{read_only:false; response_revision:435; number_of_response:1; }","duration":"406.871741ms","start":"2024-04-15T18:55:00.612318Z","end":"2024-04-15T18:55:01.01919Z","steps":["trace[1184546407] 'process raft request'  (duration: 384.060244ms)","trace[1184546407] 'compare'  (duration: 21.902274ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-15T18:55:01.019791Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-15T18:55:00.612293Z","time spent":"407.283998ms","remote":"127.0.0.1:58416","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":785,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/kube-dns\" mod_revision:416 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/kube-dns\" value_size:728 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/kube-dns\" > >"}
	{"level":"info","ts":"2024-04-15T18:55:01.024515Z","caller":"traceutil/trace.go:171","msg":"trace[1739414131] linearizableReadLoop","detail":"{readStateIndex:459; appliedIndex:456; }","duration":"401.065153ms","start":"2024-04-15T18:55:00.623431Z","end":"2024-04-15T18:55:01.024496Z","steps":["trace[1739414131] 'read index received'  (duration: 373.097856ms)","trace[1739414131] 'applied index is now lower than readState.Index'  (duration: 27.966297ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-15T18:55:01.024774Z","caller":"traceutil/trace.go:171","msg":"trace[213255890] transaction","detail":"{read_only:false; response_revision:436; number_of_response:1; }","duration":"412.31138ms","start":"2024-04-15T18:55:00.612448Z","end":"2024-04-15T18:55:01.024759Z","steps":["trace[213255890] 'process raft request'  (duration: 411.419359ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T18:55:01.024893Z","caller":"traceutil/trace.go:171","msg":"trace[1087059934] transaction","detail":"{read_only:false; response_revision:437; number_of_response:1; }","duration":"410.646054ms","start":"2024-04-15T18:55:00.614221Z","end":"2024-04-15T18:55:01.024867Z","steps":["trace[1087059934] 'process raft request'  (duration: 410.208594ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-15T18:55:01.025068Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"401.625229ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-04-15T18:55:01.025122Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-15T18:55:00.614211Z","time spent":"410.858682ms","remote":"127.0.0.1:34880","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3830,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/replicasets/kube-system/coredns-76f75df574\" mod_revision:393 > success:<request_put:<key:\"/registry/replicasets/kube-system/coredns-76f75df574\" value_size:3770 >> failure:<request_range:<key:\"/registry/replicasets/kube-system/coredns-76f75df574\" > >"}
	{"level":"info","ts":"2024-04-15T18:55:01.025158Z","caller":"traceutil/trace.go:171","msg":"trace[1572055441] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:437; }","duration":"401.739145ms","start":"2024-04-15T18:55:00.623402Z","end":"2024-04-15T18:55:01.025141Z","steps":["trace[1572055441] 'agreement among raft nodes before linearized reading'  (duration: 401.608627ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-15T18:55:01.025196Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-15T18:55:00.623324Z","time spent":"401.860261ms","remote":"127.0.0.1:58214","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-04-15T18:55:01.025119Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-15T18:55:00.612432Z","time spent":"412.627923ms","remote":"127.0.0.1:58536","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1298,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/endpointslices/kube-system/kube-dns-t7m4w\" mod_revision:426 > success:<request_put:<key:\"/registry/endpointslices/kube-system/kube-dns-t7m4w\" value_size:1239 >> failure:<request_range:<key:\"/registry/endpointslices/kube-system/kube-dns-t7m4w\" > >"}
	{"level":"info","ts":"2024-04-15T18:55:05.706563Z","caller":"traceutil/trace.go:171","msg":"trace[93017411] transaction","detail":"{read_only:false; response_revision:440; number_of_response:1; }","duration":"236.952879ms","start":"2024-04-15T18:55:05.46952Z","end":"2024-04-15T18:55:05.706473Z","steps":["trace[93017411] 'process raft request'  (duration: 236.491117ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-15T18:55:06.891188Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"319.781227ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873777233169007780 > lease_revoke:<id:40898ee31bf0c446>","response":"size:28"}
	{"level":"info","ts":"2024-04-15T18:55:06.891291Z","caller":"traceutil/trace.go:171","msg":"trace[1043465017] linearizableReadLoop","detail":"{readStateIndex:464; appliedIndex:463; }","duration":"266.676415ms","start":"2024-04-15T18:55:06.624598Z","end":"2024-04-15T18:55:06.891274Z","steps":["trace[1043465017] 'read index received'  (duration: 56.307µs)","trace[1043465017] 'applied index is now lower than readState.Index'  (duration: 266.615807ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-15T18:55:06.891382Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"266.768028ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-15T18:55:06.891423Z","caller":"traceutil/trace.go:171","msg":"trace[152076260] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:440; }","duration":"266.841738ms","start":"2024-04-15T18:55:06.624566Z","end":"2024-04-15T18:55:06.891408Z","steps":["trace[152076260] 'agreement among raft nodes before linearized reading'  (duration: 266.757627ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T18:55:09.987361Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-04-15T18:55:09.988272Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"pause-176700","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"]}
	{"level":"warn","ts":"2024-04-15T18:55:09.988397Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-15T18:55:09.994593Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-15T18:55:10.011296Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.103.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-15T18:55:10.011351Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.103.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-15T18:55:10.011428Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f23060b075c4c089","current-leader-member-id":"f23060b075c4c089"}
	{"level":"info","ts":"2024-04-15T18:55:10.099396Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2024-04-15T18:55:10.099668Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2024-04-15T18:55:10.0997Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"pause-176700","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"]}
	
	
	==> kernel <==
	 18:56:26 up  6:36,  0 users,  load average: 7.75, 7.35, 4.77
	Linux pause-176700 5.15.146.1-microsoft-standard-WSL2 #1 SMP Thu Jan 11 04:09:03 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kube-apiserver [13fbd1683bed] <==
	E0415 18:56:07.895398       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0415 18:56:07.895512       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0415 18:56:07.895560       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0415 18:56:07.895589       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0415 18:56:07.895604       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0415 18:56:07.896973       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	I0415 18:56:07.897142       1 trace.go:236] Trace[444373023]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:31b25cc8-4383-4b5d-b0c9-eb7b3c6253da,client:192.168.103.2,api-group:,api-version:v1,name:coredns,subresource:token,namespace:kube-system,protocol:HTTP/2.0,resource:serviceaccounts,scope:resource,url:/api/v1/namespaces/kube-system/serviceaccounts/coredns/token,user-agent:kubelet/v1.29.3 (linux/amd64) kubernetes/6813625,verb:POST (15-Apr-2024 18:56:02.298) (total time: 5598ms):
	Trace[444373023]: ---"Write to database call failed" len:156,err:Timeout: request did not complete within requested timeout - context canceled 5594ms (18:56:07.893)
	Trace[444373023]: [5.598375197s] [5.598375197s] END
	E0415 18:56:07.897194       1 timeout.go:142] post-timeout activity - time-elapsed: 3.807202ms, POST "/api/v1/namespaces/kube-system/serviceaccounts/coredns/token" result: <nil>
	E0415 18:56:07.897143       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	I0415 18:56:07.897205       1 trace.go:236] Trace[715867467]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:47430b5a-d74c-42e9-bc93-d54c36112bc7,client:192.168.103.2,api-group:coordination.k8s.io,api-version:v1,name:pause-176700,subresource:,namespace:kube-node-lease,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-176700,user-agent:kubelet/v1.29.3 (linux/amd64) kubernetes/6813625,verb:PUT (15-Apr-2024 18:56:01.287) (total time: 6609ms):
	Trace[715867467]: ["GuaranteedUpdate etcd3" audit-id:47430b5a-d74c-42e9-bc93-d54c36112bc7,key:/leases/kube-node-lease/pause-176700,type:*coordination.Lease,resource:leases.coordination.k8s.io 6609ms (18:56:01.287)
	Trace[715867467]:  ---"Txn call failed" err:context canceled 6604ms (18:56:07.893)]
	Trace[715867467]: [6.609663374s] [6.609663374s] END
	E0415 18:56:07.897271       1 timeout.go:142] post-timeout activity - time-elapsed: 3.765797ms, PUT "/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-176700" result: <nil>
	I0415 18:56:07.897339       1 trace.go:236] Trace[1519021224]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:67e718f7-2ebe-4de5-8c5d-e3dddf11db1d,client:192.168.103.2,api-group:,api-version:v1,name:,subresource:,namespace:,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes,user-agent:kubelet/v1.29.3 (linux/amd64) kubernetes/6813625,verb:POST (15-Apr-2024 18:56:01.288) (total time: 6609ms):
	Trace[1519021224]: ["Create etcd3" audit-id:67e718f7-2ebe-4de5-8c5d-e3dddf11db1d,key:/minions/pause-176700,type:*core.Node,resource:nodes 6607ms (18:56:01.289)
	Trace[1519021224]:  ---"Txn call failed" err:context canceled 6603ms (18:56:07.893)]
	Trace[1519021224]: [6.609295625s] [6.609295625s] END
	E0415 18:56:07.897522       1 timeout.go:142] post-timeout activity - time-elapsed: 4.132645ms, POST "/api/v1/nodes" result: <nil>
	E0415 18:56:07.898593       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	I0415 18:56:07.898698       1 trace.go:236] Trace[1474810988]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:fdeeb321-f668-4f94-90f1-c19dd0ce1816,client:192.168.103.2,api-group:storage.k8s.io,api-version:v1,name:pause-176700,subresource:,namespace:,protocol:HTTP/2.0,resource:csinodes,scope:resource,url:/apis/storage.k8s.io/v1/csinodes/pause-176700,user-agent:kubelet/v1.29.3 (linux/amd64) kubernetes/6813625,verb:GET (15-Apr-2024 18:56:01.289) (total time: 6609ms):
	Trace[1474810988]: [6.609644669s] [6.609644669s] END
	E0415 18:56:07.898754       1 timeout.go:142] post-timeout activity - time-elapsed: 5.178083ms, GET "/apis/storage.k8s.io/v1/csinodes/pause-176700" result: <nil>
	
	
	==> kube-apiserver [c8642e8133ee] <==
	W0415 18:55:19.062937       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:55:19.078006       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:55:19.079511       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:55:19.150940       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:55:19.157341       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:55:19.161347       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:55:19.233773       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:55:19.313633       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:55:19.367495       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:55:19.395208       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:55:19.418851       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:55:19.457517       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:55:19.465844       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:55:19.485717       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:55:19.501080       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:55:19.642785       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:55:19.668462       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:55:19.694148       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:55:19.704650       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:55:19.759647       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:55:19.794161       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:55:19.824383       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:55:19.824471       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:55:19.901548       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:55:19.945807       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [4c054de8e645] <==
	I0415 18:55:49.887533       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0415 18:55:49.887545       1 shared_informer.go:318] Caches are synced for deployment
	I0415 18:55:49.887558       1 shared_informer.go:318] Caches are synced for stateful set
	I0415 18:55:49.890757       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="5.784389ms"
	I0415 18:55:49.890835       1 shared_informer.go:318] Caches are synced for cronjob
	I0415 18:55:49.885331       1 shared_informer.go:318] Caches are synced for TTL
	I0415 18:55:49.888518       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0415 18:55:49.888863       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-176700"
	I0415 18:55:49.887571       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0415 18:55:49.891586       1 event.go:376] "Event occurred" object="pause-176700" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-176700 event: Registered Node pause-176700 in Controller"
	I0415 18:55:49.893477       1 node_lifecycle_controller.go:1068] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0415 18:55:49.897845       1 range_allocator.go:174] "Sending events to api server"
	I0415 18:55:49.898774       1 shared_informer.go:318] Caches are synced for HPA
	I0415 18:55:49.899183       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0415 18:55:49.899568       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0415 18:55:49.899587       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0415 18:55:49.982807       1 shared_informer.go:318] Caches are synced for disruption
	I0415 18:55:50.038339       1 shared_informer.go:318] Caches are synced for persistent volume
	I0415 18:55:50.043859       1 shared_informer.go:318] Caches are synced for attach detach
	I0415 18:55:50.083052       1 shared_informer.go:318] Caches are synced for resource quota
	I0415 18:55:50.083077       1 shared_informer.go:318] Caches are synced for resource quota
	I0415 18:55:50.083105       1 shared_informer.go:318] Caches are synced for PV protection
	I0415 18:55:50.389806       1 shared_informer.go:318] Caches are synced for garbage collector
	I0415 18:55:50.457331       1 shared_informer.go:318] Caches are synced for garbage collector
	I0415 18:55:50.457481       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	
	==> kube-controller-manager [626d700a09bd] <==
	I0415 18:54:25.063571       1 shared_informer.go:318] Caches are synced for resource quota
	I0415 18:54:25.409570       1 shared_informer.go:318] Caches are synced for garbage collector
	I0415 18:54:25.409880       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0415 18:54:25.412833       1 shared_informer.go:318] Caches are synced for garbage collector
	I0415 18:54:25.828420       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-7lm47"
	I0415 18:54:25.925319       1 event.go:376] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-76f75df574 to 2"
	I0415 18:54:26.096548       1 event.go:376] "Event occurred" object="kube-system/coredns-76f75df574" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-76f75df574-rc2w4"
	I0415 18:54:26.124006       1 event.go:376] "Event occurred" object="kube-system/coredns-76f75df574" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-76f75df574-t2b6w"
	I0415 18:54:29.803494       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="3.876656269s"
	I0415 18:54:30.244417       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="440.783004ms"
	I0415 18:54:30.244733       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="86.912µs"
	I0415 18:54:30.244792       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="32.604µs"
	I0415 18:54:30.435565       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="105.014µs"
	I0415 18:54:30.766824       1 event.go:376] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-76f75df574 to 1 from 2"
	I0415 18:54:30.834363       1 event.go:376] "Event occurred" object="kube-system/coredns-76f75df574" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-76f75df574-rc2w4"
	I0415 18:54:30.876808       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="110.204878ms"
	I0415 18:54:30.918585       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="41.597291ms"
	I0415 18:54:30.919244       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="87.412µs"
	I0415 18:54:33.811432       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="125.916µs"
	I0415 18:54:34.861592       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="109.414µs"
	I0415 18:54:47.777411       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="85.309µs"
	I0415 18:54:48.144769       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="82.108µs"
	I0415 18:54:48.164778       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="83.708µs"
	I0415 18:55:01.026520       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="419.558364ms"
	I0415 18:55:01.027176       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="142.32µs"
	
	
	==> kube-proxy [14ab20c2ac93] <==
	I0415 18:54:32.732851       1 server_others.go:72] "Using iptables proxy"
	I0415 18:54:32.811467       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.103.2"]
	I0415 18:54:32.934728       1 server.go:652] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0415 18:54:32.934801       1 server_others.go:168] "Using iptables Proxier"
	I0415 18:54:32.941764       1 server_others.go:512] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0415 18:54:32.941873       1 server_others.go:529] "Defaulting to no-op detect-local"
	I0415 18:54:32.941944       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0415 18:54:32.943144       1 server.go:865] "Version info" version="v1.29.3"
	I0415 18:54:32.993347       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0415 18:54:32.998080       1 config.go:188] "Starting service config controller"
	I0415 18:54:32.998288       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0415 18:54:32.998247       1 config.go:97] "Starting endpoint slice config controller"
	I0415 18:54:32.998361       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0415 18:54:32.998272       1 config.go:315] "Starting node config controller"
	I0415 18:54:32.998420       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0415 18:54:33.100308       1 shared_informer.go:318] Caches are synced for service config
	I0415 18:54:33.100509       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0415 18:54:33.100245       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [d1480622dacd] <==
	I0415 18:55:30.888011       1 server_others.go:72] "Using iptables proxy"
	E0415 18:55:30.892635       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-176700\": dial tcp 192.168.103.2:8443: connect: connection refused"
	E0415 18:55:32.093358       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-176700\": dial tcp 192.168.103.2:8443: connect: connection refused"
	I0415 18:55:37.402305       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.103.2"]
	I0415 18:55:37.688343       1 server.go:652] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0415 18:55:37.688481       1 server_others.go:168] "Using iptables Proxier"
	I0415 18:55:37.693348       1 server_others.go:512] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0415 18:55:37.693512       1 server_others.go:529] "Defaulting to no-op detect-local"
	I0415 18:55:37.693722       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0415 18:55:37.694726       1 server.go:865] "Version info" version="v1.29.3"
	I0415 18:55:37.694866       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0415 18:55:37.696521       1 config.go:188] "Starting service config controller"
	I0415 18:55:37.700793       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0415 18:55:37.697874       1 config.go:315] "Starting node config controller"
	I0415 18:55:37.698529       1 config.go:97] "Starting endpoint slice config controller"
	I0415 18:55:37.702158       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0415 18:55:37.702172       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0415 18:55:37.702244       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0415 18:55:37.702282       1 shared_informer.go:318] Caches are synced for node config
	I0415 18:55:37.803268       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [92d4582c6a2b] <==
	W0415 18:54:08.526779       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0415 18:54:08.526969       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0415 18:54:08.538352       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0415 18:54:08.538515       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0415 18:54:08.543523       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0415 18:54:08.543623       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0415 18:54:08.556758       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0415 18:54:08.556864       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0415 18:54:08.597864       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0415 18:54:08.598004       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0415 18:54:08.641391       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0415 18:54:08.641628       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0415 18:54:08.713738       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0415 18:54:08.713930       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0415 18:54:08.719673       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0415 18:54:08.719806       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0415 18:54:08.819969       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0415 18:54:08.820128       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0415 18:54:08.955125       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0415 18:54:08.955223       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0415 18:54:11.113360       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0415 18:55:09.988584       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0415 18:55:09.988779       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0415 18:55:09.988978       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0415 18:55:09.989587       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [a5252c26ecd2] <==
	I0415 18:55:32.107108       1 serving.go:380] Generated self-signed cert in-memory
	W0415 18:55:37.201859       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0415 18:55:37.202115       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found]
	W0415 18:55:37.202140       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0415 18:55:37.202154       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0415 18:55:37.386269       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0415 18:55:37.386318       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0415 18:55:37.391896       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0415 18:55:37.392078       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0415 18:55:37.395262       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0415 18:55:37.402851       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0415 18:55:37.494167       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 15 18:56:02 pause-176700 kubelet[6411]: I0415 18:56:02.096157    6411 topology_manager.go:215] "Topology Admit Handler" podUID="c890678f-54c9-40e0-99c6-1ec4aa396a04" podNamespace="kube-system" podName="kube-proxy-7lm47"
	Apr 15 18:56:02 pause-176700 kubelet[6411]: I0415 18:56:02.096514    6411 topology_manager.go:215] "Topology Admit Handler" podUID="5488f80b-0761-4c97-a2c7-08aeca6362d0" podNamespace="kube-system" podName="coredns-76f75df574-t2b6w"
	Apr 15 18:56:02 pause-176700 kubelet[6411]: I0415 18:56:02.195584    6411 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Apr 15 18:56:02 pause-176700 kubelet[6411]: I0415 18:56:02.296723    6411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c890678f-54c9-40e0-99c6-1ec4aa396a04-lib-modules\") pod \"kube-proxy-7lm47\" (UID: \"c890678f-54c9-40e0-99c6-1ec4aa396a04\") " pod="kube-system/kube-proxy-7lm47"
	Apr 15 18:56:02 pause-176700 kubelet[6411]: I0415 18:56:02.296891    6411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c890678f-54c9-40e0-99c6-1ec4aa396a04-xtables-lock\") pod \"kube-proxy-7lm47\" (UID: \"c890678f-54c9-40e0-99c6-1ec4aa396a04\") " pod="kube-system/kube-proxy-7lm47"
	Apr 15 18:56:02 pause-176700 kubelet[6411]: E0415 18:56:02.296800    6411 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: object "kube-system"/"kube-proxy" not registered
	Apr 15 18:56:02 pause-176700 kubelet[6411]: E0415 18:56:02.297214    6411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c890678f-54c9-40e0-99c6-1ec4aa396a04-kube-proxy podName:c890678f-54c9-40e0-99c6-1ec4aa396a04 nodeName:}" failed. No retries permitted until 2024-04-15 18:56:02.797188472 +0000 UTC m=+2.001620944 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/c890678f-54c9-40e0-99c6-1ec4aa396a04-kube-proxy") pod "kube-proxy-7lm47" (UID: "c890678f-54c9-40e0-99c6-1ec4aa396a04") : object "kube-system"/"kube-proxy" not registered
	Apr 15 18:56:02 pause-176700 kubelet[6411]: E0415 18:56:02.297221    6411 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 15 18:56:02 pause-176700 kubelet[6411]: E0415 18:56:02.297349    6411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5488f80b-0761-4c97-a2c7-08aeca6362d0-config-volume podName:5488f80b-0761-4c97-a2c7-08aeca6362d0 nodeName:}" failed. No retries permitted until 2024-04-15 18:56:02.797336291 +0000 UTC m=+2.001768863 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5488f80b-0761-4c97-a2c7-08aeca6362d0-config-volume") pod "coredns-76f75df574-t2b6w" (UID: "5488f80b-0761-4c97-a2c7-08aeca6362d0") : object "kube-system"/"coredns" not registered
	Apr 15 18:56:02 pause-176700 kubelet[6411]: E0415 18:56:02.801643    6411 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: object "kube-system"/"kube-proxy" not registered
	Apr 15 18:56:02 pause-176700 kubelet[6411]: E0415 18:56:02.801751    6411 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 15 18:56:02 pause-176700 kubelet[6411]: E0415 18:56:02.801875    6411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c890678f-54c9-40e0-99c6-1ec4aa396a04-kube-proxy podName:c890678f-54c9-40e0-99c6-1ec4aa396a04 nodeName:}" failed. No retries permitted until 2024-04-15 18:56:03.801852561 +0000 UTC m=+3.006285033 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/c890678f-54c9-40e0-99c6-1ec4aa396a04-kube-proxy") pod "kube-proxy-7lm47" (UID: "c890678f-54c9-40e0-99c6-1ec4aa396a04") : object "kube-system"/"kube-proxy" not registered
	Apr 15 18:56:02 pause-176700 kubelet[6411]: E0415 18:56:02.801898    6411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5488f80b-0761-4c97-a2c7-08aeca6362d0-config-volume podName:5488f80b-0761-4c97-a2c7-08aeca6362d0 nodeName:}" failed. No retries permitted until 2024-04-15 18:56:03.801890266 +0000 UTC m=+3.006322838 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5488f80b-0761-4c97-a2c7-08aeca6362d0-config-volume") pod "coredns-76f75df574-t2b6w" (UID: "5488f80b-0761-4c97-a2c7-08aeca6362d0") : object "kube-system"/"coredns" not registered
	Apr 15 18:56:03 pause-176700 kubelet[6411]: E0415 18:56:03.812784    6411 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: object "kube-system"/"kube-proxy" not registered
	Apr 15 18:56:03 pause-176700 kubelet[6411]: E0415 18:56:03.812923    6411 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 15 18:56:03 pause-176700 kubelet[6411]: E0415 18:56:03.812946    6411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c890678f-54c9-40e0-99c6-1ec4aa396a04-kube-proxy podName:c890678f-54c9-40e0-99c6-1ec4aa396a04 nodeName:}" failed. No retries permitted until 2024-04-15 18:56:05.812924871 +0000 UTC m=+5.017357343 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/c890678f-54c9-40e0-99c6-1ec4aa396a04-kube-proxy") pod "kube-proxy-7lm47" (UID: "c890678f-54c9-40e0-99c6-1ec4aa396a04") : object "kube-system"/"kube-proxy" not registered
	Apr 15 18:56:03 pause-176700 kubelet[6411]: E0415 18:56:03.812989    6411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5488f80b-0761-4c97-a2c7-08aeca6362d0-config-volume podName:5488f80b-0761-4c97-a2c7-08aeca6362d0 nodeName:}" failed. No retries permitted until 2024-04-15 18:56:05.812970577 +0000 UTC m=+5.017403149 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5488f80b-0761-4c97-a2c7-08aeca6362d0-config-volume") pod "coredns-76f75df574-t2b6w" (UID: "5488f80b-0761-4c97-a2c7-08aeca6362d0") : object "kube-system"/"coredns" not registered
	Apr 15 18:56:05 pause-176700 kubelet[6411]: E0415 18:56:05.833975    6411 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: object "kube-system"/"kube-proxy" not registered
	Apr 15 18:56:05 pause-176700 kubelet[6411]: E0415 18:56:05.834139    6411 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 15 18:56:05 pause-176700 kubelet[6411]: E0415 18:56:05.834424    6411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5488f80b-0761-4c97-a2c7-08aeca6362d0-config-volume podName:5488f80b-0761-4c97-a2c7-08aeca6362d0 nodeName:}" failed. No retries permitted until 2024-04-15 18:56:09.834353095 +0000 UTC m=+9.038785567 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5488f80b-0761-4c97-a2c7-08aeca6362d0-config-volume") pod "coredns-76f75df574-t2b6w" (UID: "5488f80b-0761-4c97-a2c7-08aeca6362d0") : object "kube-system"/"coredns" not registered
	Apr 15 18:56:05 pause-176700 kubelet[6411]: E0415 18:56:05.834449    6411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c890678f-54c9-40e0-99c6-1ec4aa396a04-kube-proxy podName:c890678f-54c9-40e0-99c6-1ec4aa396a04 nodeName:}" failed. No retries permitted until 2024-04-15 18:56:09.834440107 +0000 UTC m=+9.038872579 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/c890678f-54c9-40e0-99c6-1ec4aa396a04-kube-proxy") pod "kube-proxy-7lm47" (UID: "c890678f-54c9-40e0-99c6-1ec4aa396a04") : object "kube-system"/"kube-proxy" not registered
	Apr 15 18:56:07 pause-176700 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Apr 15 18:56:07 pause-176700 kubelet[6411]: I0415 18:56:07.811899    6411 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Apr 15 18:56:07 pause-176700 systemd[1]: kubelet.service: Deactivated successfully.
	Apr 15 18:56:07 pause-176700 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	

-- /stdout --
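Note: the tail of this dump is consistent with the pause attempt itself. The kube-scheduler container was restarted around 18:55 (the RBAC "forbidden" warnings above are typically transient while a freshly restarted scheduler comes up before it can read the extension-apiserver-authentication config), and systemd stopped the kubelet at 18:56:07, which lines up with the pause -p pause-176700 invocation recorded in the audit table further below. A quick way to confirm the kubelet state inside the node by hand (a sketch, assuming the same profile name as this run):

    out/minikube-windows-amd64.exe ssh -p pause-176700 sudo systemctl is-active kubelet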
** stderr ** 
	W0415 18:56:12.286585   15596 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
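Note: the "Unable to resolve the current Docker CLI context "default"" warning above appears in the stderr of nearly every minikube invocation in this run. The Docker CLI config on the Jenkins host points at a context named "default" whose metadata file (the meta.json path in the message) is missing; minikube logs the warning and carries on, so the commands themselves still complete. A hedged sketch of how such a stale context reference is usually inspected and reset (standard Docker CLI commands, not verified against this exact host):

    docker context ls
    docker context use default

If that does not clear it, removing the "currentContext" entry from C:\Users\jenkins.minikube4\.docker\config.json (the usual location for the CLI config, next to the contexts directory named in the warning) is another common workaround.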
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-176700 -n pause-176700
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-176700 -n pause-176700: exit status 2 (1.6342952s)

-- stdout --
	Paused

-- /stdout --
** stderr ** 
	W0415 18:56:28.610033   12116 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "pause-176700" apiserver is not running, skipping kubectl commands (state="Paused")
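Note: the post-mortem helpers query one field at a time with --format Go templates and treat exit status 2 as potentially expected, since a paused component reports a non-Running state. When retracing this by hand, the whole component map can be fetched in a single call; a small sketch, assuming the same profile name:

    out/minikube-windows-amd64.exe status -p pause-176700 -o json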
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/PauseAgain]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-176700
helpers_test.go:235: (dbg) docker inspect pause-176700:

-- stdout --
	[
	    {
	        "Id": "828264e4428fde705f121070914b4f5a5b569ed49fa9c69cf693401dddeed4c4",
	        "Created": "2024-04-15T18:53:25.643713615Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 213952,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-04-15T18:53:27.163764918Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:06fc94f477def8d6ec1f9decaa8d9de4b332d5597cd1759a7075056e46e00dfc",
	        "ResolvConfPath": "/var/lib/docker/containers/828264e4428fde705f121070914b4f5a5b569ed49fa9c69cf693401dddeed4c4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/828264e4428fde705f121070914b4f5a5b569ed49fa9c69cf693401dddeed4c4/hostname",
	        "HostsPath": "/var/lib/docker/containers/828264e4428fde705f121070914b4f5a5b569ed49fa9c69cf693401dddeed4c4/hosts",
	        "LogPath": "/var/lib/docker/containers/828264e4428fde705f121070914b4f5a5b569ed49fa9c69cf693401dddeed4c4/828264e4428fde705f121070914b4f5a5b569ed49fa9c69cf693401dddeed4c4-json.log",
	        "Name": "/pause-176700",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-176700:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-176700",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/55abe54dda92b224570861c8bcb094199c2900e2f31c152a75573fd58f7a3a6b-init/diff:/var/lib/docker/overlay2/7d5cfefbd46c2f94744068cb810a43a2057da1935809c9054bd8d457b0f559e7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/55abe54dda92b224570861c8bcb094199c2900e2f31c152a75573fd58f7a3a6b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/55abe54dda92b224570861c8bcb094199c2900e2f31c152a75573fd58f7a3a6b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/55abe54dda92b224570861c8bcb094199c2900e2f31c152a75573fd58f7a3a6b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-176700",
	                "Source": "/var/lib/docker/volumes/pause-176700/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-176700",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-176700",
	                "name.minikube.sigs.k8s.io": "pause-176700",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1259b9cee88996ca143a53bf68110a9ddfdca4220a7053c7671d0284d236a2bf",
	            "SandboxKey": "/var/run/docker/netns/1259b9cee889",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54491"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54492"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54493"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54494"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54495"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-176700": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:67:02",
	                    "NetworkID": "cbf6b957465efa162b7322f43b2b3014fd373507b855130926e721f1b3cc3a84",
	                    "EndpointID": "11912731722f3e1757faece51423e293c850fcb0e4d9ded3902bccd99066efe2",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "pause-176700",
	                        "828264e4428f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
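Note: most of what the post-mortem needs from this inspect dump is the container state and the forwarded API server port. Those fields can be pulled directly with docker inspect's Go-template formatter instead of scanning the full JSON; a minimal sketch against the same container (quoting shown for PowerShell or bash; cmd.exe needs different escaping):

    docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' pause-176700
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' pause-176700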
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-176700 -n pause-176700
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-176700 -n pause-176700: exit status 2 (1.6212816s)

-- stdout --
	Running

-- /stdout --
** stderr ** 
	W0415 18:56:30.443809   13776 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestPause/serial/PauseAgain FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/PauseAgain]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p pause-176700 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p pause-176700 logs -n 25: (12.9305771s)
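Note: logs -n 25 limits each log source in the dump below to roughly its last 25 lines. When triaging locally, a fuller capture can be written to a file, and minikube can pre-filter for entries it recognizes as known problems; a hedged example using documented minikube logs flags (not part of this run):

    out/minikube-windows-amd64.exe -p pause-176700 logs --problems --file=pause-176700-logs.txt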
helpers_test.go:252: TestPause/serial/PauseAgain logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------|---------------------------|-------------------|----------------|---------------------|---------------------|
	| Command |               Args               |          Profile          |       User        |    Version     |     Start Time      |      End Time       |
	|---------|----------------------------------|---------------------------|-------------------|----------------|---------------------|---------------------|
	| stop    | -p NoKubernetes-344600           | NoKubernetes-344600       | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:52 UTC | 15 Apr 24 18:52 UTC |
	| start   | -p NoKubernetes-344600           | NoKubernetes-344600       | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:52 UTC | 15 Apr 24 18:52 UTC |
	|         | --driver=docker                  |                           |                   |                |                     |                     |
	| ssh     | -p NoKubernetes-344600 sudo      | NoKubernetes-344600       | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:52 UTC |                     |
	|         | systemctl is-active --quiet      |                           |                   |                |                     |                     |
	|         | service kubelet                  |                           |                   |                |                     |                     |
	| delete  | -p NoKubernetes-344600           | NoKubernetes-344600       | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:52 UTC | 15 Apr 24 18:52 UTC |
	| start   | -p pause-176700 --memory=2048    | pause-176700              | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:52 UTC | 15 Apr 24 18:55 UTC |
	|         | --install-addons=false           |                           |                   |                |                     |                     |
	|         | --wait=all --driver=docker       |                           |                   |                |                     |                     |
	| delete  | -p stopped-upgrade-383200        | stopped-upgrade-383200    | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:53 UTC | 15 Apr 24 18:53 UTC |
	| start   | -p docker-flags-646100           | docker-flags-646100       | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:53 UTC | 15 Apr 24 18:54 UTC |
	|         | --cache-images=false             |                           |                   |                |                     |                     |
	|         | --memory=2048                    |                           |                   |                |                     |                     |
	|         | --install-addons=false           |                           |                   |                |                     |                     |
	|         | --wait=false                     |                           |                   |                |                     |                     |
	|         | --docker-env=FOO=BAR             |                           |                   |                |                     |                     |
	|         | --docker-env=BAZ=BAT             |                           |                   |                |                     |                     |
	|         | --docker-opt=debug               |                           |                   |                |                     |                     |
	|         | --docker-opt=icc=true            |                           |                   |                |                     |                     |
	|         | --alsologtostderr -v=5           |                           |                   |                |                     |                     |
	|         | --driver=docker                  |                           |                   |                |                     |                     |
	| delete  | -p missing-upgrade-383200        | missing-upgrade-383200    | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:53 UTC | 15 Apr 24 18:54 UTC |
	| start   | -p force-systemd-env-712800      | force-systemd-env-712800  | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:54 UTC | 15 Apr 24 18:55 UTC |
	|         | --memory=2048                    |                           |                   |                |                     |                     |
	|         | --alsologtostderr -v=5           |                           |                   |                |                     |                     |
	|         | --driver=docker                  |                           |                   |                |                     |                     |
	| delete  | -p running-upgrade-465600        | running-upgrade-465600    | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:54 UTC | 15 Apr 24 18:54 UTC |
	| start   | -p force-systemd-flag-930300     | force-systemd-flag-930300 | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:54 UTC | 15 Apr 24 18:55 UTC |
	|         | --memory=2048 --force-systemd    |                           |                   |                |                     |                     |
	|         | --alsologtostderr -v=5           |                           |                   |                |                     |                     |
	|         | --driver=docker                  |                           |                   |                |                     |                     |
	| ssh     | docker-flags-646100 ssh          | docker-flags-646100       | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:54 UTC | 15 Apr 24 18:54 UTC |
	|         | sudo systemctl show docker       |                           |                   |                |                     |                     |
	|         | --property=Environment           |                           |                   |                |                     |                     |
	|         | --no-pager                       |                           |                   |                |                     |                     |
	| ssh     | docker-flags-646100 ssh          | docker-flags-646100       | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:54 UTC | 15 Apr 24 18:54 UTC |
	|         | sudo systemctl show docker       |                           |                   |                |                     |                     |
	|         | --property=ExecStart             |                           |                   |                |                     |                     |
	|         | --no-pager                       |                           |                   |                |                     |                     |
	| delete  | -p docker-flags-646100           | docker-flags-646100       | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:54 UTC | 15 Apr 24 18:54 UTC |
	| start   | -p kubernetes-upgrade-023700     | kubernetes-upgrade-023700 | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:54 UTC |                     |
	|         | --memory=2200                    |                           |                   |                |                     |                     |
	|         | --kubernetes-version=v1.20.0     |                           |                   |                |                     |                     |
	|         | --alsologtostderr -v=1           |                           |                   |                |                     |                     |
	|         | --driver=docker                  |                           |                   |                |                     |                     |
	| start   | -p pause-176700                  | pause-176700              | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:55 UTC | 15 Apr 24 18:55 UTC |
	|         | --alsologtostderr -v=1           |                           |                   |                |                     |                     |
	|         | --driver=docker                  |                           |                   |                |                     |                     |
	| ssh     | force-systemd-env-712800         | force-systemd-env-712800  | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:55 UTC | 15 Apr 24 18:55 UTC |
	|         | ssh docker info --format         |                           |                   |                |                     |                     |
	|         | {{.CgroupDriver}}                |                           |                   |                |                     |                     |
	| delete  | -p force-systemd-env-712800      | force-systemd-env-712800  | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:55 UTC | 15 Apr 24 18:55 UTC |
	| start   | -p cert-expiration-262100        | cert-expiration-262100    | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:55 UTC |                     |
	|         | --memory=2048                    |                           |                   |                |                     |                     |
	|         | --cert-expiration=3m             |                           |                   |                |                     |                     |
	|         | --driver=docker                  |                           |                   |                |                     |                     |
	| ssh     | force-systemd-flag-930300        | force-systemd-flag-930300 | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:55 UTC | 15 Apr 24 18:55 UTC |
	|         | ssh docker info --format         |                           |                   |                |                     |                     |
	|         | {{.CgroupDriver}}                |                           |                   |                |                     |                     |
	| delete  | -p force-systemd-flag-930300     | force-systemd-flag-930300 | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:55 UTC | 15 Apr 24 18:55 UTC |
	| start   | -p cert-options-410800           | cert-options-410800       | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:55 UTC |                     |
	|         | --memory=2048                    |                           |                   |                |                     |                     |
	|         | --apiserver-ips=127.0.0.1        |                           |                   |                |                     |                     |
	|         | --apiserver-ips=192.168.15.15    |                           |                   |                |                     |                     |
	|         | --apiserver-names=localhost      |                           |                   |                |                     |                     |
	|         | --apiserver-names=www.google.com |                           |                   |                |                     |                     |
	|         | --apiserver-port=8555            |                           |                   |                |                     |                     |
	|         | --driver=docker                  |                           |                   |                |                     |                     |
	|         | --apiserver-name=localhost       |                           |                   |                |                     |                     |
	| pause   | -p pause-176700                  | pause-176700              | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:55 UTC | 15 Apr 24 18:55 UTC |
	|         | --alsologtostderr -v=5           |                           |                   |                |                     |                     |
	| unpause | -p pause-176700                  | pause-176700              | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:55 UTC | 15 Apr 24 18:56 UTC |
	|         | --alsologtostderr -v=5           |                           |                   |                |                     |                     |
	| pause   | -p pause-176700                  | pause-176700              | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 18:56 UTC |                     |
	|         | --alsologtostderr -v=5           |                           |                   |                |                     |                     |
	|---------|----------------------------------|---------------------------|-------------------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/15 18:55:47
	Running on machine: minikube4
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0415 18:55:47.331410   10032 out.go:291] Setting OutFile to fd 1924 ...
	I0415 18:55:47.331410   10032 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 18:55:47.331410   10032 out.go:304] Setting ErrFile to fd 1768...
	I0415 18:55:47.331410   10032 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 18:55:47.355402   10032 out.go:298] Setting JSON to false
	I0415 18:55:47.359410   10032 start.go:129] hostinfo: {"hostname":"minikube4","uptime":23817,"bootTime":1713183530,"procs":211,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W0415 18:55:47.359410   10032 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 18:55:47.365409   10032 out.go:177] * [cert-options-410800] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0415 18:55:47.369424   10032 notify.go:220] Checking for updates...
	I0415 18:55:47.371408   10032 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0415 18:55:47.373414   10032 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 18:55:47.375405   10032 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I0415 18:55:47.377409   10032 out.go:177]   - MINIKUBE_LOCATION=18634
	I0415 18:55:47.379412   10032 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 18:55:44.257553    4824 pod_ready.go:102] pod "kube-controller-manager-pause-176700" in "kube-system" namespace has status "Ready":"False"
	I0415 18:55:46.759941    4824 pod_ready.go:102] pod "kube-controller-manager-pause-176700" in "kube-system" namespace has status "Ready":"False"
	I0415 18:55:43.233964    8252 cli_runner.go:217] Completed: docker run --rm --name cert-expiration-262100-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-262100 --entrypoint /usr/bin/test -v cert-expiration-262100:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b -d /var/lib: (2.1137455s)
	I0415 18:55:43.233964    8252 oci.go:107] Successfully prepared a docker volume cert-expiration-262100
	I0415 18:55:43.233964    8252 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 18:55:43.233964    8252 kic.go:194] Starting extracting preloaded images to volume ...
	I0415 18:55:43.245948    8252 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cert-expiration-262100:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b -I lz4 -xf /preloaded.tar -C /extractDir
	I0415 18:55:47.382414   10032 config.go:182] Loaded profile config "cert-expiration-262100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 18:55:47.383419   10032 config.go:182] Loaded profile config "kubernetes-upgrade-023700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0415 18:55:47.383419   10032 config.go:182] Loaded profile config "pause-176700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 18:55:47.384417   10032 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 18:55:47.703547   10032 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0415 18:55:47.714555   10032 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 18:55:48.097699   10032 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:104 SystemTime:2024-04-15 18:55:48.055879854 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 In
dexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersio
n:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:
Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 18:55:48.102742   10032 out.go:177] * Using the docker driver based on user configuration
	I0415 18:55:48.108691   10032 start.go:297] selected driver: docker
	I0415 18:55:48.108691   10032 start.go:901] validating driver "docker" against <nil>
	I0415 18:55:48.108691   10032 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 18:55:48.193692   10032 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 18:55:48.592794   10032 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:104 SystemTime:2024-04-15 18:55:48.549432237 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 In
dexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersio
n:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:
Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 18:55:48.592794   10032 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 18:55:48.594715   10032 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0415 18:55:48.598710   10032 out.go:177] * Using Docker Desktop driver with root privileges
	I0415 18:55:48.601711   10032 cni.go:84] Creating CNI manager for ""
	I0415 18:55:48.601711   10032 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 18:55:48.601711   10032 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0415 18:55:48.601711   10032 start.go:340] cluster config:
	{Name:cert-options-410800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:cert-options-410800 Namespace:default APIServerHAVIP: APIServerName:localhost APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0
.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 18:55:48.605723   10032 out.go:177] * Starting "cert-options-410800" primary control-plane node in "cert-options-410800" cluster
	I0415 18:55:48.611768   10032 cache.go:121] Beginning downloading kic base image for docker with docker
	I0415 18:55:48.615727   10032 out.go:177] * Pulling base image v0.0.43-1713176859-18634 ...
	I0415 18:55:48.251933    4824 pod_ready.go:92] pod "kube-controller-manager-pause-176700" in "kube-system" namespace has status "Ready":"True"
	I0415 18:55:48.251933    4824 pod_ready.go:81] duration metric: took 10.5172735s for pod "kube-controller-manager-pause-176700" in "kube-system" namespace to be "Ready" ...
	I0415 18:55:48.251933    4824 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7lm47" in "kube-system" namespace to be "Ready" ...
	I0415 18:55:48.265700    4824 pod_ready.go:92] pod "kube-proxy-7lm47" in "kube-system" namespace has status "Ready":"True"
	I0415 18:55:48.266716    4824 pod_ready.go:81] duration metric: took 14.7822ms for pod "kube-proxy-7lm47" in "kube-system" namespace to be "Ready" ...
	I0415 18:55:48.266716    4824 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-176700" in "kube-system" namespace to be "Ready" ...
	I0415 18:55:48.309697    4824 pod_ready.go:92] pod "kube-scheduler-pause-176700" in "kube-system" namespace has status "Ready":"True"
	I0415 18:55:48.309697    4824 pod_ready.go:81] duration metric: took 42.9793ms for pod "kube-scheduler-pause-176700" in "kube-system" namespace to be "Ready" ...
	I0415 18:55:48.309697    4824 pod_ready.go:38] duration metric: took 11.1129647s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0415 18:55:48.309697    4824 api_server.go:52] waiting for apiserver process to appear ...
	I0415 18:55:48.332719    4824 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0415 18:55:48.377715    4824 api_server.go:72] duration metric: took 14.6701142s to wait for apiserver process to appear ...
	I0415 18:55:48.377715    4824 api_server.go:88] waiting for apiserver healthz status ...
	I0415 18:55:48.377715    4824 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:54495/healthz ...
	I0415 18:55:48.403709    4824 api_server.go:279] https://127.0.0.1:54495/healthz returned 200:
	ok
	I0415 18:55:48.415721    4824 api_server.go:141] control plane version: v1.29.3
	I0415 18:55:48.415721    4824 api_server.go:131] duration metric: took 38.0038ms to wait for apiserver health ...
	I0415 18:55:48.415721    4824 system_pods.go:43] waiting for kube-system pods to appear ...
	I0415 18:55:48.456696    4824 system_pods.go:59] 6 kube-system pods found
	I0415 18:55:48.456696    4824 system_pods.go:61] "coredns-76f75df574-t2b6w" [5488f80b-0761-4c97-a2c7-08aeca6362d0] Running
	I0415 18:55:48.456696    4824 system_pods.go:61] "etcd-pause-176700" [a924906f-acd2-4e3a-a031-0755dd7bd5e8] Running
	I0415 18:55:48.456696    4824 system_pods.go:61] "kube-apiserver-pause-176700" [7db98514-aaf6-4ffa-b4b7-119a3bee0522] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0415 18:55:48.456696    4824 system_pods.go:61] "kube-controller-manager-pause-176700" [0116200c-6f3c-4cd5-a04b-0afe6bbacff4] Running
	I0415 18:55:48.456696    4824 system_pods.go:61] "kube-proxy-7lm47" [c890678f-54c9-40e0-99c6-1ec4aa396a04] Running
	I0415 18:55:48.456696    4824 system_pods.go:61] "kube-scheduler-pause-176700" [25445fef-b0a6-4032-be00-e6f3636bde9f] Running
	I0415 18:55:48.456696    4824 system_pods.go:74] duration metric: took 40.9735ms to wait for pod list to return data ...
	I0415 18:55:48.456696    4824 default_sa.go:34] waiting for default service account to be created ...
	I0415 18:55:48.464714    4824 default_sa.go:45] found service account: "default"
	I0415 18:55:48.464714    4824 default_sa.go:55] duration metric: took 8.0179ms for default service account to be created ...
	I0415 18:55:48.464714    4824 system_pods.go:116] waiting for k8s-apps to be running ...
	I0415 18:55:48.477716    4824 system_pods.go:86] 6 kube-system pods found
	I0415 18:55:48.477716    4824 system_pods.go:89] "coredns-76f75df574-t2b6w" [5488f80b-0761-4c97-a2c7-08aeca6362d0] Running
	I0415 18:55:48.477716    4824 system_pods.go:89] "etcd-pause-176700" [a924906f-acd2-4e3a-a031-0755dd7bd5e8] Running
	I0415 18:55:48.477716    4824 system_pods.go:89] "kube-apiserver-pause-176700" [7db98514-aaf6-4ffa-b4b7-119a3bee0522] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0415 18:55:48.477716    4824 system_pods.go:89] "kube-controller-manager-pause-176700" [0116200c-6f3c-4cd5-a04b-0afe6bbacff4] Running
	I0415 18:55:48.477716    4824 system_pods.go:89] "kube-proxy-7lm47" [c890678f-54c9-40e0-99c6-1ec4aa396a04] Running
	I0415 18:55:48.477716    4824 system_pods.go:89] "kube-scheduler-pause-176700" [25445fef-b0a6-4032-be00-e6f3636bde9f] Running
	I0415 18:55:48.477716    4824 system_pods.go:126] duration metric: took 13.0011ms to wait for k8s-apps to be running ...
	I0415 18:55:48.477716    4824 system_svc.go:44] waiting for kubelet service to be running ....
	I0415 18:55:48.491709    4824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 18:55:48.519734    4824 system_svc.go:56] duration metric: took 42.0161ms WaitForService to wait for kubelet
	I0415 18:55:48.519734    4824 kubeadm.go:576] duration metric: took 14.8121266s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 18:55:48.519734    4824 node_conditions.go:102] verifying NodePressure condition ...
	I0415 18:55:48.527715    4824 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I0415 18:55:48.527715    4824 node_conditions.go:123] node cpu capacity is 16
	I0415 18:55:48.527715    4824 node_conditions.go:105] duration metric: took 7.9806ms to run NodePressure ...
	I0415 18:55:48.527715    4824 start.go:240] waiting for startup goroutines ...
	I0415 18:55:48.527715    4824 start.go:245] waiting for cluster config update ...
	I0415 18:55:48.527715    4824 start.go:254] writing updated cluster config ...
	I0415 18:55:48.545718    4824 ssh_runner.go:195] Run: rm -f paused
	I0415 18:55:48.723284    4824 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0415 18:55:48.729254    4824 out.go:177] * Done! kubectl is now configured to use "pause-176700" cluster and "default" namespace by default
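Since minikube names the kubectl context after the profile, a quick follow-up check of the repaired pause-176700 cluster from the host would look roughly like this (assuming the kubeconfig path logged above is the active one):

    # Confirm the control-plane pods reported Running above are visible via kubectl.
    kubectl --context pause-176700 get pods -n kube-system
    # Print client and server versions; start.go:600 above performs the same skew check.
    kubectl --context pause-176700 version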
	I0415 18:55:48.620733   10032 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 18:55:48.620733   10032 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b in local docker daemon
	I0415 18:55:48.620733   10032 preload.go:147] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0415 18:55:48.620733   10032 cache.go:56] Caching tarball of preloaded images
	I0415 18:55:48.621716   10032 preload.go:173] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0415 18:55:48.621716   10032 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 18:55:48.621716   10032 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cert-options-410800\config.json ...
	I0415 18:55:48.621716   10032 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cert-options-410800\config.json: {Name:mk419a5b829dfb7b072e0a250cdd188dd69de34f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:55:48.842871   10032 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b in local docker daemon, skipping pull
	I0415 18:55:48.842871   10032 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b exists in daemon, skipping load
	I0415 18:55:48.842871   10032 cache.go:194] Successfully downloaded all kic artifacts
	I0415 18:55:48.842871   10032 start.go:360] acquireMachinesLock for cert-options-410800: {Name:mk6c8ea33aff7d577cac9175821791e55f059c1c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 18:55:48.842871   10032 start.go:364] duration metric: took 0s to acquireMachinesLock for "cert-options-410800"
	I0415 18:55:48.842871   10032 start.go:93] Provisioning new machine with config: &{Name:cert-options-410800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:cert-options-410800 Namespace:default APIServerHAVIP: APISe
rverName:localhost APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8555 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 18:55:48.842871   10032 start.go:125] createHost starting for "" (driver="docker")
	I0415 18:55:48.848865   10032 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0415 18:55:48.848865   10032 start.go:159] libmachine.API.Create for "cert-options-410800" (driver="docker")
	I0415 18:55:48.848865   10032 client.go:168] LocalClient.Create starting
	I0415 18:55:48.848865   10032 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I0415 18:55:48.848865   10032 main.go:141] libmachine: Decoding PEM data...
	I0415 18:55:48.848865   10032 main.go:141] libmachine: Parsing certificate...
	I0415 18:55:48.848865   10032 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I0415 18:55:48.848865   10032 main.go:141] libmachine: Decoding PEM data...
	I0415 18:55:48.848865   10032 main.go:141] libmachine: Parsing certificate...
	I0415 18:55:48.864855   10032 cli_runner.go:164] Run: docker network inspect cert-options-410800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0415 18:55:49.050863   10032 cli_runner.go:211] docker network inspect cert-options-410800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0415 18:55:49.062865   10032 network_create.go:281] running [docker network inspect cert-options-410800] to gather additional debugging logs...
	I0415 18:55:49.062865   10032 cli_runner.go:164] Run: docker network inspect cert-options-410800
	W0415 18:55:49.273889   10032 cli_runner.go:211] docker network inspect cert-options-410800 returned with exit code 1
	I0415 18:55:49.273889   10032 network_create.go:284] error running [docker network inspect cert-options-410800]: docker network inspect cert-options-410800: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network cert-options-410800 not found
	I0415 18:55:49.273889   10032 network_create.go:286] output of [docker network inspect cert-options-410800]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network cert-options-410800 not found
	
	** /stderr **
	I0415 18:55:49.284895   10032 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 18:55:49.524894   10032 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 18:55:49.556890   10032 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 18:55:49.588901   10032 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 18:55:49.619902   10032 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 18:55:49.652898   10032 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0023b1b30}
	I0415 18:55:49.652898   10032 network_create.go:124] attempt to create docker network cert-options-410800 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0415 18:55:49.675908   10032 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cert-options-410800 cert-options-410800
	I0415 18:55:49.993803   10032 network_create.go:108] docker network cert-options-410800 192.168.85.0/24 created
	I0415 18:55:49.993803   10032 kic.go:121] calculated static IP "192.168.85.2" for the "cert-options-410800" container
	I0415 18:55:50.023798   10032 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0415 18:55:50.235427   10032 cli_runner.go:164] Run: docker volume create cert-options-410800 --label name.minikube.sigs.k8s.io=cert-options-410800 --label created_by.minikube.sigs.k8s.io=true
	I0415 18:55:50.528439   10032 oci.go:103] Successfully created a docker volume cert-options-410800
	I0415 18:55:50.540416   10032 cli_runner.go:164] Run: docker run --rm --name cert-options-410800-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-options-410800 --entrypoint /usr/bin/test -v cert-options-410800:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b -d /var/lib
	I0415 18:55:57.710420    8708 docker.go:649] duration metric: took 14.3238015s to copy over tarball
	I0415 18:55:57.723195    8708 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0415 18:56:03.799672    8708 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (6.0761453s)
	I0415 18:56:03.799705    8708 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0415 18:56:03.900664    8708 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0415 18:56:03.925329    8708 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0415 18:56:03.971219    8708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:56:04.149961    8708 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0415 18:56:10.385200   10032 cli_runner.go:217] Completed: docker run --rm --name cert-options-410800-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-options-410800 --entrypoint /usr/bin/test -v cert-options-410800:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b -d /var/lib: (19.8438637s)
	I0415 18:56:10.385200   10032 oci.go:107] Successfully prepared a docker volume cert-options-410800
	I0415 18:56:10.385200   10032 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 18:56:10.385200   10032 kic.go:194] Starting extracting preloaded images to volume ...
	I0415 18:56:10.397212   10032 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cert-options-410800:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b -I lz4 -xf /preloaded.tar -C /extractDir
	I0415 18:56:10.370215    8252 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cert-expiration-262100:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b -I lz4 -xf /preloaded.tar -C /extractDir: (27.1230083s)
	I0415 18:56:10.370215    8252 kic.go:203] duration metric: took 27.1349919s to extract preloaded images to volume ...
	I0415 18:56:10.383247    8252 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 18:56:10.737207    8252 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:84 OomKillDisable:true NGoroutines:94 SystemTime:2024-04-15 18:56:10.692380906 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion
:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:D
ocker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 18:56:10.753201    8252 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0415 18:56:11.191807    8252 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cert-expiration-262100 --name cert-expiration-262100 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-262100 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cert-expiration-262100 --network cert-expiration-262100 --ip 192.168.76.2 --volume cert-expiration-262100:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b
	I0415 18:56:10.823197    8708 ssh_runner.go:235] Completed: sudo systemctl restart docker: (6.6729265s)
	I0415 18:56:10.836206    8708 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0415 18:56:10.891209    8708 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-proxy:v1.20.0
	k8s.gcr.io/kube-scheduler:v1.20.0
	k8s.gcr.io/kube-controller-manager:v1.20.0
	k8s.gcr.io/kube-apiserver:v1.20.0
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	k8s.gcr.io/pause:3.2
	
	-- /stdout --
	I0415 18:56:10.891209    8708 docker.go:691] registry.k8s.io/kube-apiserver:v1.20.0 wasn't preloaded
	I0415 18:56:10.891209    8708 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
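The mismatch here is a registry rename: the v1.20.0 preload tarball still carries images under the old k8s.gcr.io names, while this minikube build looks them up as registry.k8s.io/*, so every control-plane image "needs transfer" even though equivalent bits are already on the node. A manual workaround, sketched only to illustrate the relationship between the two name sets (for example via `minikube ssh -p kubernetes-upgrade-023700`), is to retag the preloaded images inside the node's Docker:

    # Illustrative only: mirror the preloaded k8s.gcr.io images under the
    # registry.k8s.io names that the image loader expects.
    for img in kube-apiserver kube-controller-manager kube-scheduler kube-proxy; do
      docker tag "k8s.gcr.io/${img}:v1.20.0" "registry.k8s.io/${img}:v1.20.0"
    done
    docker tag k8s.gcr.io/etcd:3.4.13-0 registry.k8s.io/etcd:3.4.13-0
    docker tag k8s.gcr.io/coredns:1.7.0 registry.k8s.io/coredns:1.7.0
    docker tag k8s.gcr.io/pause:3.2     registry.k8s.io/pause:3.2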
	I0415 18:56:10.909803    8708 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0415 18:56:10.942017    8708 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0415 18:56:10.947979    8708 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0415 18:56:10.947979    8708 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0415 18:56:10.947979    8708 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0415 18:56:10.950986    8708 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0415 18:56:10.951985    8708 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0415 18:56:10.953990    8708 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0415 18:56:10.964014    8708 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0415 18:56:10.980991    8708 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0415 18:56:10.983021    8708 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0415 18:56:10.983021    8708 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0415 18:56:10.994429    8708 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0415 18:56:11.005752    8708 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0415 18:56:11.026808    8708 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0415 18:56:11.036802    8708 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	W0415 18:56:11.111801    8708 image.go:187] authn lookup for registry.k8s.io/kube-controller-manager:v1.20.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0415 18:56:11.204871    8708 image.go:187] authn lookup for registry.k8s.io/kube-proxy:v1.20.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0415 18:56:11.284813    8708 image.go:187] authn lookup for gcr.io/k8s-minikube/storage-provisioner:v5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0415 18:56:11.379821    8708 image.go:187] authn lookup for registry.k8s.io/pause:3.2 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0415 18:56:11.473746    8708 image.go:187] authn lookup for registry.k8s.io/kube-apiserver:v1.20.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0415 18:56:11.551695    8708 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0415 18:56:11.564648    8708 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	W0415 18:56:11.569656    8708 image.go:187] authn lookup for registry.k8s.io/coredns:1.7.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0415 18:56:11.604679    8708 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0415 18:56:11.604679    8708 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.20.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.20.0
	I0415 18:56:11.605655    8708 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0415 18:56:11.615706    8708 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0415 18:56:11.619679    8708 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0415 18:56:11.621661    8708 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0415 18:56:11.626672    8708 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0415 18:56:11.626672    8708 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.20.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.20.0
	I0415 18:56:11.626672    8708 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0415 18:56:11.639684    8708 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.20.0
	W0415 18:56:11.680140    8708 image.go:187] authn lookup for registry.k8s.io/kube-scheduler:v1.20.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0415 18:56:11.683671    8708 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0415 18:56:11.705653    8708 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0415 18:56:11.705653    8708 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.20.0
	I0415 18:56:11.705653    8708 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.2 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.2
	I0415 18:56:11.705653    8708 docker.go:337] Removing image: registry.k8s.io/pause:3.2
	I0415 18:56:11.718667    8708 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I0415 18:56:11.718667    8708 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.20.0
	W0415 18:56:11.772637    8708 image.go:187] authn lookup for registry.k8s.io/etcd:3.4.13-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0415 18:56:11.803933    8708 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0415 18:56:11.803933    8708 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.20.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.20.0
	I0415 18:56:11.803933    8708 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0415 18:56:11.815970    8708 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0415 18:56:11.826220    8708 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.2
	I0415 18:56:11.863929    8708 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.20.0
	I0415 18:56:11.864928    8708 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0415 18:56:11.890944    8708 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0415 18:56:11.924556    8708 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0415 18:56:11.925591    8708 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns:1.7.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.7.0
	I0415 18:56:11.925591    8708 docker.go:337] Removing image: registry.k8s.io/coredns:1.7.0
	I0415 18:56:11.939584    8708 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0415 18:56:11.939584    8708 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.20.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.20.0
	I0415 18:56:11.939584    8708 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.7.0
	I0415 18:56:11.939584    8708 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0415 18:56:11.955559    8708 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0415 18:56:11.971557    8708 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0415 18:56:11.998554    8708 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.7.0
	I0415 18:56:12.009612    8708 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.20.0
	I0415 18:56:12.023855    8708 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0415 18:56:12.023930    8708 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.4.13-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.13-0
	I0415 18:56:12.023992    8708 docker.go:337] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0415 18:56:12.036258    8708 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.13-0
	I0415 18:56:12.077026    8708 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.13-0
	I0415 18:56:12.077026    8708 cache_images.go:92] duration metric: took 1.1857613s to LoadCachedImages
	W0415 18:56:12.077026    8708 out.go:239] X Unable to load cached images: loading cached images: CreateFile C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.20.0: The system cannot find the file specified.
	I0415 18:56:12.078036    8708 kubeadm.go:928] updating node { 192.168.112.2 8443 v1.20.0 docker true true} ...
	I0415 18:56:12.078036    8708 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kubernetes-upgrade-023700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.112.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-023700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0415 18:56:12.087035    8708 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0415 18:56:12.195590    8708 cni.go:84] Creating CNI manager for ""
	I0415 18:56:12.195590    8708 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0415 18:56:12.195590    8708 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0415 18:56:12.195590    8708 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.112.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-023700 NodeName:kubernetes-upgrade-023700 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.112.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.112.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.cr
t StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0415 18:56:12.195590    8708 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.112.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kubernetes-upgrade-023700"
	  kubeletExtraArgs:
	    node-ip: 192.168.112.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.112.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0415 18:56:12.208579    8708 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0415 18:56:12.225581    8708 binaries.go:44] Found k8s binaries, skipping transfer
	I0415 18:56:12.239583    8708 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0415 18:56:12.256587    8708 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (352 bytes)
	I0415 18:56:12.287587    8708 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0415 18:56:12.318592    8708 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
	I0415 18:56:12.360605    8708 ssh_runner.go:195] Run: grep 192.168.112.2	control-plane.minikube.internal$ /etc/hosts
	I0415 18:56:12.370614    8708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.112.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0415 18:56:12.400587    8708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 18:56:12.555438    8708 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0415 18:56:12.590603    8708 certs.go:68] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-023700 for IP: 192.168.112.2
	I0415 18:56:12.590652    8708 certs.go:194] generating shared ca certs ...
	I0415 18:56:12.590652    8708 certs.go:226] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:56:12.591581    8708 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I0415 18:56:12.591985    8708 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I0415 18:56:12.592192    8708 certs.go:256] generating profile certs ...
	I0415 18:56:12.592864    8708 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-023700\client.key
	I0415 18:56:12.593168    8708 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-023700\client.crt with IP's: []
	I0415 18:56:12.682222    8708 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-023700\client.crt ...
	I0415 18:56:12.682222    8708 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-023700\client.crt: {Name:mk99dc51e7b42cbd52a67ca3352b52cfaeafcaee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:56:12.682900    8708 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-023700\client.key ...
	I0415 18:56:12.683816    8708 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-023700\client.key: {Name:mkf8def31d02f1fb8cd6d5c25a20a7c29dd76f8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:56:12.684822    8708 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-023700\apiserver.key.46afbe48
	I0415 18:56:12.684822    8708 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-023700\apiserver.crt.46afbe48 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.112.2]
	I0415 18:56:12.914277    8708 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-023700\apiserver.crt.46afbe48 ...
	I0415 18:56:12.914277    8708 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-023700\apiserver.crt.46afbe48: {Name:mkfaf5133cbf2ea802d89b4bf33d963e02db3a2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:56:12.916441    8708 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-023700\apiserver.key.46afbe48 ...
	I0415 18:56:12.916529    8708 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-023700\apiserver.key.46afbe48: {Name:mk1f8667fb4f9ad9c6e4e84256cb0f0304ce38f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:56:12.917934    8708 certs.go:381] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-023700\apiserver.crt.46afbe48 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-023700\apiserver.crt
	I0415 18:56:12.930629    8708 certs.go:385] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-023700\apiserver.key.46afbe48 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-023700\apiserver.key
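The apiserver serving certificate generated just above embeds the SANs listed at crypto.go:68 (10.96.0.1, 127.0.0.1, 10.0.0.1 and the node IP 192.168.112.2). If a SAN mismatch is ever suspected, the written profile cert can be inspected directly; a minimal sketch, using the apiserver.crt file named in the log:

    # Show the Subject Alternative Names baked into the generated apiserver cert.
    openssl x509 -noout -text -in apiserver.crt | grep -A1 "Subject Alternative Name"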
	I0415 18:56:12.932073    8708 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-023700\proxy-client.key
	I0415 18:56:12.932289    8708 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-023700\proxy-client.crt with IP's: []
	I0415 18:56:13.100480    8708 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-023700\proxy-client.crt ...
	I0415 18:56:13.100480    8708 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-023700\proxy-client.crt: {Name:mk8c853862cc55d207131540aaf3aded86acdc44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:56:13.101482    8708 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-023700\proxy-client.key ...
	I0415 18:56:13.101482    8708 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-023700\proxy-client.key: {Name:mkaa03ce6d7ccc35e9074e5cefec7a1616372973 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 18:56:13.112477    8708 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11748.pem (1338 bytes)
	W0415 18:56:13.112477    8708 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11748_empty.pem, impossibly tiny 0 bytes
	I0415 18:56:13.113485    8708 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0415 18:56:13.113485    8708 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0415 18:56:13.113485    8708 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0415 18:56:13.113485    8708 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0415 18:56:13.114476    8708 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117482.pem (1708 bytes)
	I0415 18:56:13.115470    8708 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0415 18:56:13.225663    8708 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0415 18:56:13.285223    8708 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0415 18:56:13.344209    8708 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0415 18:56:13.391222    8708 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-023700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0415 18:56:13.435220    8708 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-023700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0415 18:56:13.496247    8708 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-023700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0415 18:56:13.545241    8708 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-023700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0415 18:56:13.625818    8708 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11748.pem --> /usr/share/ca-certificates/11748.pem (1338 bytes)
	I0415 18:56:13.707091    8708 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117482.pem --> /usr/share/ca-certificates/117482.pem (1708 bytes)
	I0415 18:56:13.750055    8708 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0415 18:56:13.799081    8708 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0415 18:56:13.855068    8708 ssh_runner.go:195] Run: openssl version
	I0415 18:56:13.888067    8708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11748.pem && ln -fs /usr/share/ca-certificates/11748.pem /etc/ssl/certs/11748.pem"
	I0415 18:56:13.933228    8708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11748.pem
	I0415 18:56:13.949621    8708 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 17:52 /usr/share/ca-certificates/11748.pem
	I0415 18:56:13.961627    8708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11748.pem
	I0415 18:56:13.998643    8708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11748.pem /etc/ssl/certs/51391683.0"
	I0415 18:56:14.040685    8708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/117482.pem && ln -fs /usr/share/ca-certificates/117482.pem /etc/ssl/certs/117482.pem"
	I0415 18:56:14.071623    8708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117482.pem
	I0415 18:56:14.080649    8708 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 17:52 /usr/share/ca-certificates/117482.pem
	I0415 18:56:14.102636    8708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117482.pem
	I0415 18:56:14.128637    8708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/117482.pem /etc/ssl/certs/3ec20f2e.0"
	I0415 18:56:14.159301    8708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0415 18:56:14.189301    8708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0415 18:56:14.203307    8708 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I0415 18:56:14.215306    8708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0415 18:56:14.245311    8708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
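The certificate installation steps above follow the standard OpenSSL hashed-symlink layout: each PEM is copied into /usr/share/ca-certificates, its subject hash is computed, and a <hash>.0 symlink is dropped into /etc/ssl/certs (b5213941.0 for minikubeCA.pem here). Condensed into one sketch that mirrors the logged commands:

    # Copy the CA, compute its subject hash, and link it where OpenSSL will find it.
    sudo cp minikubeCA.pem /usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"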
	I0415 18:56:14.288313    8708 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0415 18:56:14.303371    8708 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0415 18:56:14.304366    8708 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-023700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-023700 Namespace:default APIServerHAVIP: APIServerName:m
inikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.112.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: Socket
VMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 18:56:14.315371    8708 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0415 18:56:14.380211    8708 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0415 18:56:14.410222    8708 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0415 18:56:14.427208    8708 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0415 18:56:14.440211    8708 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0415 18:56:14.462826    8708 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0415 18:56:14.462826    8708 kubeadm.go:156] found existing configuration files:
	
	I0415 18:56:14.477825    8708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0415 18:56:14.503850    8708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0415 18:56:14.518833    8708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0415 18:56:14.552831    8708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0415 18:56:13.266202    8252 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cert-expiration-262100 --name cert-expiration-262100 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-262100 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cert-expiration-262100 --network cert-expiration-262100 --ip 192.168.76.2 --volume cert-expiration-262100:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b: (2.0742984s)
	I0415 18:56:13.282244    8252 cli_runner.go:164] Run: docker container inspect cert-expiration-262100 --format={{.State.Running}}
	I0415 18:56:13.490211    8252 cli_runner.go:164] Run: docker container inspect cert-expiration-262100 --format={{.State.Status}}
	I0415 18:56:13.711058    8252 cli_runner.go:164] Run: docker exec cert-expiration-262100 stat /var/lib/dpkg/alternatives/iptables
	I0415 18:56:14.030641    8252 oci.go:144] the created container "cert-expiration-262100" has a running status.
	I0415 18:56:14.030641    8252 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\cert-expiration-262100\id_rsa...
	I0415 18:56:14.460826    8252 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\cert-expiration-262100\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0415 18:56:14.703115    8252 cli_runner.go:164] Run: docker container inspect cert-expiration-262100 --format={{.State.Status}}
	I0415 18:56:14.891626    8252 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0415 18:56:14.892627    8252 kic_runner.go:114] Args: [docker exec --privileged cert-expiration-262100 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0415 18:56:15.159413    8252 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\cert-expiration-262100\id_rsa...
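The 8252 lines above (id_rsa creation, copying id_rsa.pub into /home/docker/.ssh/authorized_keys, the "docker exec ... chown" call, and the key-permission step) show the KIC SSH provisioning pattern. Below is a minimal, self-contained Go sketch of that pattern under stated assumptions: the container name is taken from the log for illustration, and generating the keypair inline plus installing it with a single "docker exec" shell command are simplifications, not minikube's actual implementation.

// sketch_kic_ssh.go - illustrative only; see assumptions in the note above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"os/exec"

	"golang.org/x/crypto/ssh"
)

func main() {
	const container = "cert-expiration-262100" // name copied from the log, hypothetical here

	// 1. Generate an RSA keypair on the host.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	privPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		panic(err)
	}
	pubAuth := ssh.MarshalAuthorizedKey(pub)

	// 2. Write the private key with owner-only permissions (the log's
	//    "ensuring only current user has permissions to key file" step).
	if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil {
		panic(err)
	}

	// 3. Install the public key inside the container and fix ownership,
	//    mirroring the kic_runner "docker exec ... chown" call in the log.
	install := fmt.Sprintf(
		"mkdir -p /home/docker/.ssh && printf '%%s' '%s' > /home/docker/.ssh/authorized_keys && chown docker:docker /home/docker/.ssh/authorized_keys",
		string(pubAuth))
	if out, err := exec.Command("docker", "exec", "--privileged", container, "/bin/sh", "-c", install).CombinedOutput(); err != nil {
		panic(fmt.Errorf("%v: %s", err, out))
	}
}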
	I0415 18:56:14.578220    8708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0415 18:56:14.676119    8708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0415 18:56:14.706115    8708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0415 18:56:14.722115    8708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0415 18:56:14.734119    8708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0415 18:56:14.764121    8708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0415 18:56:14.780138    8708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0415 18:56:14.792139    8708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
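The 8708 lines above repeat one pattern per kubeconfig: grep the file for the expected control-plane endpoint, and if grep fails (endpoint missing or file absent, the "may not be in ... - will remove" case), delete the file so "kubeadm init" starts clean. A minimal Go sketch of that loop follows; the runSSH helper is an assumption that stands in for the log's ssh_runner and simply shells out locally to keep the sketch self-contained.

// sketch_stale_config.go - illustrative cleanup loop, not minikube's actual code.
package main

import (
	"fmt"
	"os/exec"
)

const endpoint = "https://control-plane.minikube.internal:8443"

// runSSH is a stand-in for the remote ssh_runner seen in the log.
func runSSH(cmd string) error {
	return exec.Command("/bin/sh", "-c", cmd).Run()
}

func main() {
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, conf := range confs {
		// grep exits non-zero when the endpoint is not found (or the file
		// does not exist), which triggers the removal branch in the log.
		if err := runSSH(fmt.Sprintf("sudo grep %s %s", endpoint, conf)); err != nil {
			_ = runSSH("sudo rm -f " + conf)
		}
	}
}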
	I0415 18:56:14.811124    8708 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0415 18:56:15.299657    8708 kubeadm.go:309] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0415 18:56:15.299657    8708 kubeadm.go:309] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0415 18:56:15.417618    8708 kubeadm.go:309] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 26.0.1. Latest validated version: 19.03
	I0415 18:56:15.710953    8708 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
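The "kubeadm init" invocation above disables a fixed set of preflight checks (Swap, SystemVerification, port/file checks, etc.) that cannot be satisfied inside a docker-driver container; the [WARNING ...] lines that follow are the non-fatal remainder. A small Go sketch of how such a command line can be assembled is shown below; the helper function is an assumption for illustration, while the check names are copied from the log.

// sketch_kubeadm_init.go - assembling the init command line (illustrative only).
package main

import (
	"fmt"
	"strings"
)

func kubeadmInitCmd(binDir, config string, ignored []string) string {
	return fmt.Sprintf(
		`sudo env PATH="%s:$PATH" kubeadm init --config %s --ignore-preflight-errors=%s`,
		binDir, config, strings.Join(ignored, ","),
	)
}

func main() {
	ignored := []string{
		"DirAvailable--etc-kubernetes-manifests",
		"Port-10250", "Swap", "NumCPU", "Mem", "SystemVerification",
		"FileContent--proc-sys-net-bridge-bridge-nf-call-iptables",
	}
	fmt.Println(kubeadmInitCmd("/var/lib/minikube/binaries/v1.20.0", "/var/tmp/minikube/kubeadm.yaml", ignored))
}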
	I0415 18:56:17.964507    8252 cli_runner.go:164] Run: docker container inspect cert-expiration-262100 --format={{.State.Status}}
	I0415 18:56:18.136114    8252 machine.go:94] provisionDockerMachine start ...
	I0415 18:56:18.148116    8252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-262100
	I0415 18:56:18.328441    8252 main.go:141] libmachine: Using SSH client type: native
	I0415 18:56:18.337433    8252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeda1c0] 0xedcda0 <nil>  [] 0s} 127.0.0.1 54710 <nil> <nil>}
	I0415 18:56:18.337433    8252 main.go:141] libmachine: About to run SSH command:
	hostname
	I0415 18:56:18.509938    8252 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-262100
	
	I0415 18:56:18.509938    8252 ubuntu.go:169] provisioning hostname "cert-expiration-262100"
	I0415 18:56:18.522937    8252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-262100
	I0415 18:56:18.692321    8252 main.go:141] libmachine: Using SSH client type: native
	I0415 18:56:18.693323    8252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeda1c0] 0xedcda0 <nil>  [] 0s} 127.0.0.1 54710 <nil> <nil>}
	I0415 18:56:18.693323    8252 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-262100 && echo "cert-expiration-262100" | sudo tee /etc/hostname
	I0415 18:56:18.871237    8252 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-262100
	
	I0415 18:56:18.880614    8252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-262100
	I0415 18:56:19.054236    8252 main.go:141] libmachine: Using SSH client type: native
	I0415 18:56:19.055252    8252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeda1c0] 0xedcda0 <nil>  [] 0s} 127.0.0.1 54710 <nil> <nil>}
	I0415 18:56:19.055252    8252 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-262100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-262100/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-262100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0415 18:56:19.223040    8252 main.go:141] libmachine: SSH cmd err, output: <nil>: 
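The hostname-provisioning commands above follow a fixed recipe: set the hostname, write /etc/hostname, then make sure /etc/hosts carries a 127.0.1.1 entry for the new name (replace an existing 127.0.1.1 line, otherwise append one). The Go sketch below renders that shell snippet for a given name; wrapping it in a helper function is an assumption for illustration, the shell itself paraphrases the command in the log.

// sketch_sethostname.go - rendering the hostname/hosts update script (illustrative only).
package main

import "fmt"

func setHostnameCmd(name string) string {
	return fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname
if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, name)
}

func main() {
	fmt.Println(setHostnameCmd("cert-expiration-262100"))
}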
	I0415 18:56:19.223040    8252 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I0415 18:56:19.223040    8252 ubuntu.go:177] setting up certificates
	I0415 18:56:19.223040    8252 provision.go:84] configureAuth start
	I0415 18:56:19.233111    8252 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-262100
	I0415 18:56:19.406347    8252 provision.go:143] copyHostCerts
	I0415 18:56:19.406824    8252 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I0415 18:56:19.406824    8252 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I0415 18:56:19.407123    8252 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0415 18:56:19.408116    8252 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I0415 18:56:19.408116    8252 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I0415 18:56:19.408805    8252 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0415 18:56:19.409823    8252 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I0415 18:56:19.409866    8252 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I0415 18:56:19.410020    8252 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I0415 18:56:19.411882    8252 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.cert-expiration-262100 san=[127.0.0.1 192.168.76.2 cert-expiration-262100 localhost minikube]
	I0415 18:56:19.715514    8252 provision.go:177] copyRemoteCerts
	I0415 18:56:19.728517    8252 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0415 18:56:19.747439    8252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-262100
	I0415 18:56:19.907642    8252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54710 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\cert-expiration-262100\id_rsa Username:docker}
	I0415 18:56:20.031363    8252 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0415 18:56:20.070367    8252 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1233 bytes)
	I0415 18:56:20.112797    8252 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0415 18:56:20.149781    8252 provision.go:87] duration metric: took 926.6978ms to configureAuth
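configureAuth above boils down to: copy the host CA material into place, generate a server certificate signed by that CA with the node's addresses as Subject Alternative Names (the san=[...] list in the log), and scp ca.pem/server.pem/server-key.pem to /etc/docker. The Go sketch below shows the SAN-bearing server-cert step under stated assumptions: the CA is generated inline rather than loaded from the minikube home directory, error handling is elided for brevity, and the SAN values are copied from the log for illustration.

// sketch_server_cert.go - CA-signed server cert with SANs (illustrative only).
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// CA (in the log this is loaded from certs\ca.pem / ca-key.pem).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert carrying the SANs listed in the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.cert-expiration-262100"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"cert-expiration-262100", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	// These are the files the log later copies to /etc/docker via scp.
	os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER}), 0o644)
	os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(srvKey)}), 0o600)
}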
	I0415 18:56:20.149781    8252 ubuntu.go:193] setting minikube options for container-runtime
	I0415 18:56:20.149781    8252 config.go:182] Loaded profile config "cert-expiration-262100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 18:56:20.158776    8252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-262100
	I0415 18:56:20.337078    8252 main.go:141] libmachine: Using SSH client type: native
	I0415 18:56:20.337078    8252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeda1c0] 0xedcda0 <nil>  [] 0s} 127.0.0.1 54710 <nil> <nil>}
	I0415 18:56:20.337078    8252 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0415 18:56:20.519614    8252 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0415 18:56:20.519614    8252 ubuntu.go:71] root file system type: overlay
	I0415 18:56:20.520243    8252 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0415 18:56:20.541120    8252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-262100
	I0415 18:56:20.712863    8252 main.go:141] libmachine: Using SSH client type: native
	I0415 18:56:20.713859    8252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeda1c0] 0xedcda0 <nil>  [] 0s} 127.0.0.1 54710 <nil> <nil>}
	I0415 18:56:20.714398    8252 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0415 18:56:20.924135    8252 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0415 18:56:20.935385    8252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-262100
	I0415 18:56:21.103995    8252 main.go:141] libmachine: Using SSH client type: native
	I0415 18:56:21.104750    8252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeda1c0] 0xedcda0 <nil>  [] 0s} 127.0.0.1 54710 <nil> <nil>}
	I0415 18:56:21.104750    8252 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
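The two SSH commands above implement an idempotent unit update: render the desired docker.service to a ".new" file, then only if "diff" reports a difference, move it over the installed unit and run daemon-reload/enable/restart (the "diff ... || { ... }" trick means nothing is restarted when the unit is unchanged). The Go sketch below reproduces that pattern under stated assumptions: it runs the shell locally instead of over SSH, and it stages the rendered unit through a temp file to avoid shell quoting, which is a simplification rather than minikube's actual code.

// sketch_update_unit.go - write-diff-swap-reload pattern (illustrative only).
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func run(cmd string) error { return exec.Command("/bin/sh", "-c", cmd).Run() }

func updateDockerUnit(unit string) error {
	const path = "/lib/systemd/system/docker.service"

	// Stage the rendered unit in a temp file, then copy it next to the live one.
	tmp, err := os.CreateTemp("", "docker.service.*")
	if err != nil {
		return err
	}
	defer os.Remove(tmp.Name())
	if _, err := tmp.WriteString(unit); err != nil {
		return err
	}
	tmp.Close()
	if err := run(fmt.Sprintf("sudo cp %s %s.new", tmp.Name(), path)); err != nil {
		return err
	}

	// diff exits 0 when the files are identical, so the replace-and-restart
	// branch only runs when the unit actually changed (same "||" trick as in
	// the log's final SSH command).
	return run(fmt.Sprintf(
		"sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }",
		path))
}

func main() {
	if err := updateDockerUnit("[Unit]\nDescription=Docker Application Container Engine\n"); err != nil {
		fmt.Println("update failed:", err)
	}
}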
	
	
	==> Docker <==
	Apr 15 18:55:26 pause-176700 cri-dockerd[4923]: time="2024-04-15T18:55:26Z" level=info msg="Start cri-dockerd grpc backend"
	Apr 15 18:55:26 pause-176700 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Apr 15 18:55:27 pause-176700 cri-dockerd[4923]: time="2024-04-15T18:55:27Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-76f75df574-t2b6w_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"0906724d189189bfbe7b166fdb5c256eb5ad7224fef384e5e028186cfe43b34b\""
	Apr 15 18:55:27 pause-176700 cri-dockerd[4923]: time="2024-04-15T18:55:27Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-76f75df574-t2b6w_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"bb6c804e1fc9ce5a8a3d7afedcc8f06a7f3a015195a46037907fac82fabc4fde\""
	Apr 15 18:55:27 pause-176700 cri-dockerd[4923]: time="2024-04-15T18:55:27Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6113d883501c44e9eb6da1ad484b66c33cd6cd013ec53af107e2a7586263fe8d/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Apr 15 18:55:28 pause-176700 cri-dockerd[4923]: time="2024-04-15T18:55:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/30608984b6bea274857748a419098729c8c94b5fb6765f543e2e715ac31fb44b/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Apr 15 18:55:28 pause-176700 cri-dockerd[4923]: time="2024-04-15T18:55:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8578361676bc7c616e34bf33dfef3ab036d98bc4159b482981defc62ccaf499d/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Apr 15 18:55:28 pause-176700 cri-dockerd[4923]: time="2024-04-15T18:55:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4f7d65ecb802d1f387e0ff7779dbb244f6a1d3b47e2caa9a871449a64a009332/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Apr 15 18:55:29 pause-176700 cri-dockerd[4923]: time="2024-04-15T18:55:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1ca71c7d5b12e596e475f9926393325c1e74557ed5e2cffa834fb8e76c006f0e/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Apr 15 18:55:31 pause-176700 cri-dockerd[4923]: time="2024-04-15T18:55:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/91ef24769e92ca6daed5880cda244546f92a2b11ec1ae4a4391c35e9dee1fa45/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Apr 15 18:56:08 pause-176700 dockerd[4569]: time="2024-04-15T18:56:08.339615761Z" level=error msg="Handler for POST /v1.45/containers/0b01eadb1fef/pause returned error: cannot pause container 0b01eadb1fef7d505f0162e50ddffd82c7afe1087422896badbf4e5a98544454: OCI runtime pause failed: unable to freeze: unknown" spanID=ebca0eb0c802957b traceID=5cc901435240bea76d9ea149130087fa
	Apr 15 18:56:09 pause-176700 cri-dockerd[4923]: W0415 18:56:09.411613    4923 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Apr 15 18:56:09 pause-176700 cri-dockerd[4923]: W0415 18:56:09.414657    4923 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Apr 15 18:56:13 pause-176700 dockerd[4569]: 2024/04/15 18:56:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:56:13 pause-176700 dockerd[4569]: 2024/04/15 18:56:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:56:25 pause-176700 dockerd[4569]: 2024/04/15 18:56:25 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:56:25 pause-176700 dockerd[4569]: 2024/04/15 18:56:25 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:56:27 pause-176700 dockerd[4569]: 2024/04/15 18:56:27 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:56:27 pause-176700 dockerd[4569]: 2024/04/15 18:56:27 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:56:27 pause-176700 dockerd[4569]: 2024/04/15 18:56:27 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:56:27 pause-176700 dockerd[4569]: 2024/04/15 18:56:27 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:56:27 pause-176700 dockerd[4569]: 2024/04/15 18:56:27 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:56:28 pause-176700 dockerd[4569]: 2024/04/15 18:56:28 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:56:28 pause-176700 dockerd[4569]: 2024/04/15 18:56:28 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 18:56:28 pause-176700 dockerd[4569]: 2024/04/15 18:56:28 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	13fbd1683bed3       39f995c9f1996       About a minute ago   Running             kube-apiserver            1                   91ef24769e92c       kube-apiserver-pause-176700
	eb699f5aa465b       cbb01a7bd410d       About a minute ago   Running             coredns                   1                   1ca71c7d5b12e       coredns-76f75df574-t2b6w
	d1480622dacdd       a1d263b5dc5b0       About a minute ago   Running             kube-proxy                1                   4f7d65ecb802d       kube-proxy-7lm47
	4c054de8e645b       6052a25da3f97       About a minute ago   Running             kube-controller-manager   1                   8578361676bc7       kube-controller-manager-pause-176700
	0b01eadb1fef7       3861cfcd7c04c       About a minute ago   Running             etcd                      1                   30608984b6bea       etcd-pause-176700
	a5252c26ecd24       8c390d98f50c0       About a minute ago   Running             kube-scheduler            1                   6113d883501c4       kube-scheduler-pause-176700
	2c699e8585f76       cbb01a7bd410d       2 minutes ago        Exited              coredns                   0                   bb6c804e1fc9c       coredns-76f75df574-t2b6w
	14ab20c2ac93f       a1d263b5dc5b0       2 minutes ago        Exited              kube-proxy                0                   b11e00d242b6f       kube-proxy-7lm47
	c04926c5f5e24       3861cfcd7c04c       2 minutes ago        Exited              etcd                      0                   785dcd90e2759       etcd-pause-176700
	626d700a09bd0       6052a25da3f97       2 minutes ago        Exited              kube-controller-manager   0                   5853c44224113       kube-controller-manager-pause-176700
	92d4582c6a2bf       8c390d98f50c0       2 minutes ago        Exited              kube-scheduler            0                   e4b281a1bf20b       kube-scheduler-pause-176700
	c8642e8133ee5       39f995c9f1996       2 minutes ago        Exited              kube-apiserver            0                   cda52aa7032ec       kube-apiserver-pause-176700
	
	
	==> coredns [2c699e8585f7] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: Trace[1070839197]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (15-Apr-2024 18:54:33.307) (total time: 21049ms):
	Trace[1070839197]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused 21047ms (18:54:54.351)
	Trace[1070839197]: [21.049516572s] [21.049516572s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: Trace[1633633905]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (15-Apr-2024 18:54:33.307) (total time: 21049ms):
	Trace[1633633905]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused 21049ms (18:54:54.353)
	Trace[1633633905]: [21.049860521s] [21.049860521s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: Trace[1122612462]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (15-Apr-2024 18:54:33.307) (total time: 21050ms):
	Trace[1122612462]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused 21049ms (18:54:54.353)
	Trace[1122612462]: [21.050913761s] [21.050913761s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [eb699f5aa465] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = f869070685748660180df1b7a47d58cdafcf2f368266578c062d1151dc2c900964aecc5975e8882e6de6fdfb6460463e30ebfaad2ec8f0c3c6436f80225b3b5b
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:48703 - 53199 "HINFO IN 5151283621995439190.7960359597370647878. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.036245502s
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	
	
	==> dmesg <==
	[Apr15 18:56] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [0b01eadb1fef] <==
	{"level":"warn","ts":"2024-04-15T18:56:06.300979Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":13873777233191370758,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-04-15T18:56:06.801314Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":13873777233191370758,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-04-15T18:56:07.302483Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":13873777233191370758,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-04-15T18:56:07.684238Z","caller":"wal/wal.go:805","msg":"slow fdatasync","took":"6.394835424s","expected-duration":"1s"}
	{"level":"info","ts":"2024-04-15T18:56:07.68637Z","caller":"traceutil/trace.go:171","msg":"trace[2139773263] transaction","detail":"{read_only:false; response_revision:522; number_of_response:1; }","duration":"6.397003611s","start":"2024-04-15T18:56:01.289303Z","end":"2024-04-15T18:56:07.686306Z","steps":["trace[2139773263] 'process raft request'  (duration: 6.396717373s)"],"step_count":1}
	{"level":"warn","ts":"2024-04-15T18:56:07.686934Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-15T18:56:01.289279Z","time spent":"6.39722624s","remote":"127.0.0.1:45630","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":670,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/default/pause-176700.17c689049f09487c\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-176700.17c689049f09487c\" value_size:598 lease:4650405196336594546 >> failure:<>"}
	{"level":"warn","ts":"2024-04-15T18:56:07.893633Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"5.593679178s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kube-proxy\" ","response":"","error":"context canceled"}
	{"level":"warn","ts":"2024-04-15T18:56:07.893832Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-15T18:56:01.290022Z","time spent":"6.603805196s","remote":"127.0.0.1:45730","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":0,"request content":""}
	2024/04/15 18:56:07 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-04-15T18:56:07.893877Z","caller":"traceutil/trace.go:171","msg":"trace[639495352] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kube-proxy; range_end:; }","duration":"5.593910609s","start":"2024-04-15T18:56:02.299899Z","end":"2024-04-15T18:56:07.89381Z","steps":["trace[639495352] 'agreement among raft nodes before linearized reading'  (duration: 5.593687779s)"],"step_count":1}
	{"level":"warn","ts":"2024-04-15T18:56:07.893945Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-15T18:56:02.299891Z","time spent":"5.594038325s","remote":"127.0.0.1:45768","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":0,"response size":0,"request content":"key:\"/registry/serviceaccounts/kube-system/kube-proxy\" "}
	2024/04/15 18:56:07 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-04-15T18:56:07.894023Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"5.594196046s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/coredns\" ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-04-15T18:56:07.894149Z","caller":"traceutil/trace.go:171","msg":"trace[863520601] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/coredns; range_end:; }","duration":"5.594345465s","start":"2024-04-15T18:56:02.299795Z","end":"2024-04-15T18:56:07.89414Z","steps":["trace[863520601] 'agreement among raft nodes before linearized reading'  (duration: 5.594221249s)"],"step_count":1}
	{"level":"warn","ts":"2024-04-15T18:56:07.894178Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-15T18:56:02.29978Z","time spent":"5.594389671s","remote":"127.0.0.1:45768","response type":"/etcdserverpb.KV/Range","request count":0,"request size":47,"response count":0,"response size":0,"request content":"key:\"/registry/serviceaccounts/kube-system/coredns\" "}
	2024/04/15 18:56:07 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/04/15 18:56:07 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-04-15T18:56:07.894281Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-15T18:56:01.290209Z","time spent":"6.60406773s","remote":"127.0.0.1:45838","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":0,"request content":""}
	2024/04/15 18:56:07 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-04-15T18:56:07.894386Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"6.604234953s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/pause-176700\" ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-04-15T18:56:07.894415Z","caller":"traceutil/trace.go:171","msg":"trace[2117318061] range","detail":"{range_begin:/registry/csinodes/pause-176700; range_end:; }","duration":"6.604496787s","start":"2024-04-15T18:56:01.289911Z","end":"2024-04-15T18:56:07.894408Z","steps":["trace[2117318061] 'agreement among raft nodes before linearized reading'  (duration: 6.604458582s)"],"step_count":1}
	{"level":"warn","ts":"2024-04-15T18:56:07.894436Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-15T18:56:01.289899Z","time spent":"6.604531592s","remote":"127.0.0.1:45934","response type":"/etcdserverpb.KV/Range","request count":0,"request size":33,"response count":0,"response size":0,"request content":"key:\"/registry/csinodes/pause-176700\" "}
	2024/04/15 18:56:07 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-04-15T18:56:09.424934Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"946.423979ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873777233191370762 > lease_revoke:<id:40898ee31d45ffcd>","response":"size:28"}
	{"level":"info","ts":"2024-04-15T18:56:09.428368Z","caller":"traceutil/trace.go:171","msg":"trace[260525202] linearizableReadLoop","detail":"{readStateIndex:558; appliedIndex:554; }","duration":"8.138185554s","start":"2024-04-15T18:56:01.290146Z","end":"2024-04-15T18:56:09.428332Z","steps":["trace[260525202] 'read index received'  (duration: 6.39557552s)","trace[260525202] 'applied index is now lower than readState.Index'  (duration: 1.742609334s)"],"step_count":2}
	
	
	==> etcd [c04926c5f5e2] <==
	{"level":"info","ts":"2024-04-15T18:55:01.01932Z","caller":"traceutil/trace.go:171","msg":"trace[1184546407] transaction","detail":"{read_only:false; response_revision:435; number_of_response:1; }","duration":"406.871741ms","start":"2024-04-15T18:55:00.612318Z","end":"2024-04-15T18:55:01.01919Z","steps":["trace[1184546407] 'process raft request'  (duration: 384.060244ms)","trace[1184546407] 'compare'  (duration: 21.902274ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-15T18:55:01.019791Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-15T18:55:00.612293Z","time spent":"407.283998ms","remote":"127.0.0.1:58416","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":785,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/kube-dns\" mod_revision:416 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/kube-dns\" value_size:728 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/kube-dns\" > >"}
	{"level":"info","ts":"2024-04-15T18:55:01.024515Z","caller":"traceutil/trace.go:171","msg":"trace[1739414131] linearizableReadLoop","detail":"{readStateIndex:459; appliedIndex:456; }","duration":"401.065153ms","start":"2024-04-15T18:55:00.623431Z","end":"2024-04-15T18:55:01.024496Z","steps":["trace[1739414131] 'read index received'  (duration: 373.097856ms)","trace[1739414131] 'applied index is now lower than readState.Index'  (duration: 27.966297ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-15T18:55:01.024774Z","caller":"traceutil/trace.go:171","msg":"trace[213255890] transaction","detail":"{read_only:false; response_revision:436; number_of_response:1; }","duration":"412.31138ms","start":"2024-04-15T18:55:00.612448Z","end":"2024-04-15T18:55:01.024759Z","steps":["trace[213255890] 'process raft request'  (duration: 411.419359ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T18:55:01.024893Z","caller":"traceutil/trace.go:171","msg":"trace[1087059934] transaction","detail":"{read_only:false; response_revision:437; number_of_response:1; }","duration":"410.646054ms","start":"2024-04-15T18:55:00.614221Z","end":"2024-04-15T18:55:01.024867Z","steps":["trace[1087059934] 'process raft request'  (duration: 410.208594ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-15T18:55:01.025068Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"401.625229ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-04-15T18:55:01.025122Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-15T18:55:00.614211Z","time spent":"410.858682ms","remote":"127.0.0.1:34880","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3830,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/replicasets/kube-system/coredns-76f75df574\" mod_revision:393 > success:<request_put:<key:\"/registry/replicasets/kube-system/coredns-76f75df574\" value_size:3770 >> failure:<request_range:<key:\"/registry/replicasets/kube-system/coredns-76f75df574\" > >"}
	{"level":"info","ts":"2024-04-15T18:55:01.025158Z","caller":"traceutil/trace.go:171","msg":"trace[1572055441] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:437; }","duration":"401.739145ms","start":"2024-04-15T18:55:00.623402Z","end":"2024-04-15T18:55:01.025141Z","steps":["trace[1572055441] 'agreement among raft nodes before linearized reading'  (duration: 401.608627ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-15T18:55:01.025196Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-15T18:55:00.623324Z","time spent":"401.860261ms","remote":"127.0.0.1:58214","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-04-15T18:55:01.025119Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-15T18:55:00.612432Z","time spent":"412.627923ms","remote":"127.0.0.1:58536","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1298,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/endpointslices/kube-system/kube-dns-t7m4w\" mod_revision:426 > success:<request_put:<key:\"/registry/endpointslices/kube-system/kube-dns-t7m4w\" value_size:1239 >> failure:<request_range:<key:\"/registry/endpointslices/kube-system/kube-dns-t7m4w\" > >"}
	{"level":"info","ts":"2024-04-15T18:55:05.706563Z","caller":"traceutil/trace.go:171","msg":"trace[93017411] transaction","detail":"{read_only:false; response_revision:440; number_of_response:1; }","duration":"236.952879ms","start":"2024-04-15T18:55:05.46952Z","end":"2024-04-15T18:55:05.706473Z","steps":["trace[93017411] 'process raft request'  (duration: 236.491117ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-15T18:55:06.891188Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"319.781227ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873777233169007780 > lease_revoke:<id:40898ee31bf0c446>","response":"size:28"}
	{"level":"info","ts":"2024-04-15T18:55:06.891291Z","caller":"traceutil/trace.go:171","msg":"trace[1043465017] linearizableReadLoop","detail":"{readStateIndex:464; appliedIndex:463; }","duration":"266.676415ms","start":"2024-04-15T18:55:06.624598Z","end":"2024-04-15T18:55:06.891274Z","steps":["trace[1043465017] 'read index received'  (duration: 56.307µs)","trace[1043465017] 'applied index is now lower than readState.Index'  (duration: 266.615807ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-15T18:55:06.891382Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"266.768028ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-15T18:55:06.891423Z","caller":"traceutil/trace.go:171","msg":"trace[152076260] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:440; }","duration":"266.841738ms","start":"2024-04-15T18:55:06.624566Z","end":"2024-04-15T18:55:06.891408Z","steps":["trace[152076260] 'agreement among raft nodes before linearized reading'  (duration: 266.757627ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T18:55:09.987361Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-04-15T18:55:09.988272Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"pause-176700","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"]}
	{"level":"warn","ts":"2024-04-15T18:55:09.988397Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-15T18:55:09.994593Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-15T18:55:10.011296Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.103.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-15T18:55:10.011351Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.103.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-15T18:55:10.011428Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f23060b075c4c089","current-leader-member-id":"f23060b075c4c089"}
	{"level":"info","ts":"2024-04-15T18:55:10.099396Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2024-04-15T18:55:10.099668Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2024-04-15T18:55:10.0997Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"pause-176700","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"]}
	
	
	==> kernel <==
	 18:56:44 up  6:37,  0 users,  load average: 7.62, 7.35, 4.81
	Linux pause-176700 5.15.146.1-microsoft-standard-WSL2 #1 SMP Thu Jan 11 04:09:03 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kube-apiserver [13fbd1683bed] <==
	E0415 18:56:07.895398       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0415 18:56:07.895512       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0415 18:56:07.895560       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0415 18:56:07.895589       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0415 18:56:07.895604       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0415 18:56:07.896973       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	I0415 18:56:07.897142       1 trace.go:236] Trace[444373023]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:31b25cc8-4383-4b5d-b0c9-eb7b3c6253da,client:192.168.103.2,api-group:,api-version:v1,name:coredns,subresource:token,namespace:kube-system,protocol:HTTP/2.0,resource:serviceaccounts,scope:resource,url:/api/v1/namespaces/kube-system/serviceaccounts/coredns/token,user-agent:kubelet/v1.29.3 (linux/amd64) kubernetes/6813625,verb:POST (15-Apr-2024 18:56:02.298) (total time: 5598ms):
	Trace[444373023]: ---"Write to database call failed" len:156,err:Timeout: request did not complete within requested timeout - context canceled 5594ms (18:56:07.893)
	Trace[444373023]: [5.598375197s] [5.598375197s] END
	E0415 18:56:07.897194       1 timeout.go:142] post-timeout activity - time-elapsed: 3.807202ms, POST "/api/v1/namespaces/kube-system/serviceaccounts/coredns/token" result: <nil>
	E0415 18:56:07.897143       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	I0415 18:56:07.897205       1 trace.go:236] Trace[715867467]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:47430b5a-d74c-42e9-bc93-d54c36112bc7,client:192.168.103.2,api-group:coordination.k8s.io,api-version:v1,name:pause-176700,subresource:,namespace:kube-node-lease,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-176700,user-agent:kubelet/v1.29.3 (linux/amd64) kubernetes/6813625,verb:PUT (15-Apr-2024 18:56:01.287) (total time: 6609ms):
	Trace[715867467]: ["GuaranteedUpdate etcd3" audit-id:47430b5a-d74c-42e9-bc93-d54c36112bc7,key:/leases/kube-node-lease/pause-176700,type:*coordination.Lease,resource:leases.coordination.k8s.io 6609ms (18:56:01.287)
	Trace[715867467]:  ---"Txn call failed" err:context canceled 6604ms (18:56:07.893)]
	Trace[715867467]: [6.609663374s] [6.609663374s] END
	E0415 18:56:07.897271       1 timeout.go:142] post-timeout activity - time-elapsed: 3.765797ms, PUT "/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-176700" result: <nil>
	I0415 18:56:07.897339       1 trace.go:236] Trace[1519021224]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:67e718f7-2ebe-4de5-8c5d-e3dddf11db1d,client:192.168.103.2,api-group:,api-version:v1,name:,subresource:,namespace:,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes,user-agent:kubelet/v1.29.3 (linux/amd64) kubernetes/6813625,verb:POST (15-Apr-2024 18:56:01.288) (total time: 6609ms):
	Trace[1519021224]: ["Create etcd3" audit-id:67e718f7-2ebe-4de5-8c5d-e3dddf11db1d,key:/minions/pause-176700,type:*core.Node,resource:nodes 6607ms (18:56:01.289)
	Trace[1519021224]:  ---"Txn call failed" err:context canceled 6603ms (18:56:07.893)]
	Trace[1519021224]: [6.609295625s] [6.609295625s] END
	E0415 18:56:07.897522       1 timeout.go:142] post-timeout activity - time-elapsed: 4.132645ms, POST "/api/v1/nodes" result: <nil>
	E0415 18:56:07.898593       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	I0415 18:56:07.898698       1 trace.go:236] Trace[1474810988]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:fdeeb321-f668-4f94-90f1-c19dd0ce1816,client:192.168.103.2,api-group:storage.k8s.io,api-version:v1,name:pause-176700,subresource:,namespace:,protocol:HTTP/2.0,resource:csinodes,scope:resource,url:/apis/storage.k8s.io/v1/csinodes/pause-176700,user-agent:kubelet/v1.29.3 (linux/amd64) kubernetes/6813625,verb:GET (15-Apr-2024 18:56:01.289) (total time: 6609ms):
	Trace[1474810988]: [6.609644669s] [6.609644669s] END
	E0415 18:56:07.898754       1 timeout.go:142] post-timeout activity - time-elapsed: 5.178083ms, GET "/apis/storage.k8s.io/v1/csinodes/pause-176700" result: <nil>
	
	
	==> kube-apiserver [c8642e8133ee] <==
	W0415 18:55:19.062937       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:55:19.078006       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:55:19.079511       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:55:19.150940       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:55:19.157341       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:55:19.161347       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:55:19.233773       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:55:19.313633       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:55:19.367495       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:55:19.395208       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:55:19.418851       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:55:19.457517       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:55:19.465844       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:55:19.485717       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:55:19.501080       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:55:19.642785       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:55:19.668462       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:55:19.694148       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:55:19.704650       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:55:19.759647       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:55:19.794161       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:55:19.824383       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:55:19.824471       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:55:19.901548       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0415 18:55:19.945807       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [4c054de8e645] <==
	I0415 18:55:49.887533       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0415 18:55:49.887545       1 shared_informer.go:318] Caches are synced for deployment
	I0415 18:55:49.887558       1 shared_informer.go:318] Caches are synced for stateful set
	I0415 18:55:49.890757       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="5.784389ms"
	I0415 18:55:49.890835       1 shared_informer.go:318] Caches are synced for cronjob
	I0415 18:55:49.885331       1 shared_informer.go:318] Caches are synced for TTL
	I0415 18:55:49.888518       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0415 18:55:49.888863       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-176700"
	I0415 18:55:49.887571       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0415 18:55:49.891586       1 event.go:376] "Event occurred" object="pause-176700" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-176700 event: Registered Node pause-176700 in Controller"
	I0415 18:55:49.893477       1 node_lifecycle_controller.go:1068] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0415 18:55:49.897845       1 range_allocator.go:174] "Sending events to api server"
	I0415 18:55:49.898774       1 shared_informer.go:318] Caches are synced for HPA
	I0415 18:55:49.899183       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0415 18:55:49.899568       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0415 18:55:49.899587       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0415 18:55:49.982807       1 shared_informer.go:318] Caches are synced for disruption
	I0415 18:55:50.038339       1 shared_informer.go:318] Caches are synced for persistent volume
	I0415 18:55:50.043859       1 shared_informer.go:318] Caches are synced for attach detach
	I0415 18:55:50.083052       1 shared_informer.go:318] Caches are synced for resource quota
	I0415 18:55:50.083077       1 shared_informer.go:318] Caches are synced for resource quota
	I0415 18:55:50.083105       1 shared_informer.go:318] Caches are synced for PV protection
	I0415 18:55:50.389806       1 shared_informer.go:318] Caches are synced for garbage collector
	I0415 18:55:50.457331       1 shared_informer.go:318] Caches are synced for garbage collector
	I0415 18:55:50.457481       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	
	==> kube-controller-manager [626d700a09bd] <==
	I0415 18:54:25.063571       1 shared_informer.go:318] Caches are synced for resource quota
	I0415 18:54:25.409570       1 shared_informer.go:318] Caches are synced for garbage collector
	I0415 18:54:25.409880       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0415 18:54:25.412833       1 shared_informer.go:318] Caches are synced for garbage collector
	I0415 18:54:25.828420       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-7lm47"
	I0415 18:54:25.925319       1 event.go:376] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-76f75df574 to 2"
	I0415 18:54:26.096548       1 event.go:376] "Event occurred" object="kube-system/coredns-76f75df574" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-76f75df574-rc2w4"
	I0415 18:54:26.124006       1 event.go:376] "Event occurred" object="kube-system/coredns-76f75df574" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-76f75df574-t2b6w"
	I0415 18:54:29.803494       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="3.876656269s"
	I0415 18:54:30.244417       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="440.783004ms"
	I0415 18:54:30.244733       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="86.912µs"
	I0415 18:54:30.244792       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="32.604µs"
	I0415 18:54:30.435565       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="105.014µs"
	I0415 18:54:30.766824       1 event.go:376] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-76f75df574 to 1 from 2"
	I0415 18:54:30.834363       1 event.go:376] "Event occurred" object="kube-system/coredns-76f75df574" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-76f75df574-rc2w4"
	I0415 18:54:30.876808       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="110.204878ms"
	I0415 18:54:30.918585       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="41.597291ms"
	I0415 18:54:30.919244       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="87.412µs"
	I0415 18:54:33.811432       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="125.916µs"
	I0415 18:54:34.861592       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="109.414µs"
	I0415 18:54:47.777411       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="85.309µs"
	I0415 18:54:48.144769       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="82.108µs"
	I0415 18:54:48.164778       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="83.708µs"
	I0415 18:55:01.026520       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="419.558364ms"
	I0415 18:55:01.027176       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="142.32µs"
	
	
	==> kube-proxy [14ab20c2ac93] <==
	I0415 18:54:32.732851       1 server_others.go:72] "Using iptables proxy"
	I0415 18:54:32.811467       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.103.2"]
	I0415 18:54:32.934728       1 server.go:652] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0415 18:54:32.934801       1 server_others.go:168] "Using iptables Proxier"
	I0415 18:54:32.941764       1 server_others.go:512] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0415 18:54:32.941873       1 server_others.go:529] "Defaulting to no-op detect-local"
	I0415 18:54:32.941944       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0415 18:54:32.943144       1 server.go:865] "Version info" version="v1.29.3"
	I0415 18:54:32.993347       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0415 18:54:32.998080       1 config.go:188] "Starting service config controller"
	I0415 18:54:32.998288       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0415 18:54:32.998247       1 config.go:97] "Starting endpoint slice config controller"
	I0415 18:54:32.998361       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0415 18:54:32.998272       1 config.go:315] "Starting node config controller"
	I0415 18:54:32.998420       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0415 18:54:33.100308       1 shared_informer.go:318] Caches are synced for service config
	I0415 18:54:33.100509       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0415 18:54:33.100245       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [d1480622dacd] <==
	I0415 18:55:30.888011       1 server_others.go:72] "Using iptables proxy"
	E0415 18:55:30.892635       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-176700\": dial tcp 192.168.103.2:8443: connect: connection refused"
	E0415 18:55:32.093358       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-176700\": dial tcp 192.168.103.2:8443: connect: connection refused"
	I0415 18:55:37.402305       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.103.2"]
	I0415 18:55:37.688343       1 server.go:652] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0415 18:55:37.688481       1 server_others.go:168] "Using iptables Proxier"
	I0415 18:55:37.693348       1 server_others.go:512] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0415 18:55:37.693512       1 server_others.go:529] "Defaulting to no-op detect-local"
	I0415 18:55:37.693722       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0415 18:55:37.694726       1 server.go:865] "Version info" version="v1.29.3"
	I0415 18:55:37.694866       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0415 18:55:37.696521       1 config.go:188] "Starting service config controller"
	I0415 18:55:37.700793       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0415 18:55:37.697874       1 config.go:315] "Starting node config controller"
	I0415 18:55:37.698529       1 config.go:97] "Starting endpoint slice config controller"
	I0415 18:55:37.702158       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0415 18:55:37.702172       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0415 18:55:37.702244       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0415 18:55:37.702282       1 shared_informer.go:318] Caches are synced for node config
	I0415 18:55:37.803268       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [92d4582c6a2b] <==
	W0415 18:54:08.526779       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0415 18:54:08.526969       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0415 18:54:08.538352       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0415 18:54:08.538515       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0415 18:54:08.543523       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0415 18:54:08.543623       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0415 18:54:08.556758       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0415 18:54:08.556864       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0415 18:54:08.597864       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0415 18:54:08.598004       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0415 18:54:08.641391       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0415 18:54:08.641628       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0415 18:54:08.713738       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0415 18:54:08.713930       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0415 18:54:08.719673       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0415 18:54:08.719806       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0415 18:54:08.819969       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0415 18:54:08.820128       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0415 18:54:08.955125       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0415 18:54:08.955223       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0415 18:54:11.113360       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0415 18:55:09.988584       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0415 18:55:09.988779       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0415 18:55:09.988978       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0415 18:55:09.989587       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [a5252c26ecd2] <==
	I0415 18:55:32.107108       1 serving.go:380] Generated self-signed cert in-memory
	W0415 18:55:37.201859       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0415 18:55:37.202115       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found]
	W0415 18:55:37.202140       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0415 18:55:37.202154       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0415 18:55:37.386269       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0415 18:55:37.386318       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0415 18:55:37.391896       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0415 18:55:37.392078       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0415 18:55:37.395262       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0415 18:55:37.402851       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0415 18:55:37.494167       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 15 18:56:02 pause-176700 kubelet[6411]: I0415 18:56:02.096157    6411 topology_manager.go:215] "Topology Admit Handler" podUID="c890678f-54c9-40e0-99c6-1ec4aa396a04" podNamespace="kube-system" podName="kube-proxy-7lm47"
	Apr 15 18:56:02 pause-176700 kubelet[6411]: I0415 18:56:02.096514    6411 topology_manager.go:215] "Topology Admit Handler" podUID="5488f80b-0761-4c97-a2c7-08aeca6362d0" podNamespace="kube-system" podName="coredns-76f75df574-t2b6w"
	Apr 15 18:56:02 pause-176700 kubelet[6411]: I0415 18:56:02.195584    6411 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Apr 15 18:56:02 pause-176700 kubelet[6411]: I0415 18:56:02.296723    6411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c890678f-54c9-40e0-99c6-1ec4aa396a04-lib-modules\") pod \"kube-proxy-7lm47\" (UID: \"c890678f-54c9-40e0-99c6-1ec4aa396a04\") " pod="kube-system/kube-proxy-7lm47"
	Apr 15 18:56:02 pause-176700 kubelet[6411]: I0415 18:56:02.296891    6411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c890678f-54c9-40e0-99c6-1ec4aa396a04-xtables-lock\") pod \"kube-proxy-7lm47\" (UID: \"c890678f-54c9-40e0-99c6-1ec4aa396a04\") " pod="kube-system/kube-proxy-7lm47"
	Apr 15 18:56:02 pause-176700 kubelet[6411]: E0415 18:56:02.296800    6411 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: object "kube-system"/"kube-proxy" not registered
	Apr 15 18:56:02 pause-176700 kubelet[6411]: E0415 18:56:02.297214    6411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c890678f-54c9-40e0-99c6-1ec4aa396a04-kube-proxy podName:c890678f-54c9-40e0-99c6-1ec4aa396a04 nodeName:}" failed. No retries permitted until 2024-04-15 18:56:02.797188472 +0000 UTC m=+2.001620944 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/c890678f-54c9-40e0-99c6-1ec4aa396a04-kube-proxy") pod "kube-proxy-7lm47" (UID: "c890678f-54c9-40e0-99c6-1ec4aa396a04") : object "kube-system"/"kube-proxy" not registered
	Apr 15 18:56:02 pause-176700 kubelet[6411]: E0415 18:56:02.297221    6411 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 15 18:56:02 pause-176700 kubelet[6411]: E0415 18:56:02.297349    6411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5488f80b-0761-4c97-a2c7-08aeca6362d0-config-volume podName:5488f80b-0761-4c97-a2c7-08aeca6362d0 nodeName:}" failed. No retries permitted until 2024-04-15 18:56:02.797336291 +0000 UTC m=+2.001768863 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5488f80b-0761-4c97-a2c7-08aeca6362d0-config-volume") pod "coredns-76f75df574-t2b6w" (UID: "5488f80b-0761-4c97-a2c7-08aeca6362d0") : object "kube-system"/"coredns" not registered
	Apr 15 18:56:02 pause-176700 kubelet[6411]: E0415 18:56:02.801643    6411 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: object "kube-system"/"kube-proxy" not registered
	Apr 15 18:56:02 pause-176700 kubelet[6411]: E0415 18:56:02.801751    6411 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 15 18:56:02 pause-176700 kubelet[6411]: E0415 18:56:02.801875    6411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c890678f-54c9-40e0-99c6-1ec4aa396a04-kube-proxy podName:c890678f-54c9-40e0-99c6-1ec4aa396a04 nodeName:}" failed. No retries permitted until 2024-04-15 18:56:03.801852561 +0000 UTC m=+3.006285033 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/c890678f-54c9-40e0-99c6-1ec4aa396a04-kube-proxy") pod "kube-proxy-7lm47" (UID: "c890678f-54c9-40e0-99c6-1ec4aa396a04") : object "kube-system"/"kube-proxy" not registered
	Apr 15 18:56:02 pause-176700 kubelet[6411]: E0415 18:56:02.801898    6411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5488f80b-0761-4c97-a2c7-08aeca6362d0-config-volume podName:5488f80b-0761-4c97-a2c7-08aeca6362d0 nodeName:}" failed. No retries permitted until 2024-04-15 18:56:03.801890266 +0000 UTC m=+3.006322838 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5488f80b-0761-4c97-a2c7-08aeca6362d0-config-volume") pod "coredns-76f75df574-t2b6w" (UID: "5488f80b-0761-4c97-a2c7-08aeca6362d0") : object "kube-system"/"coredns" not registered
	Apr 15 18:56:03 pause-176700 kubelet[6411]: E0415 18:56:03.812784    6411 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: object "kube-system"/"kube-proxy" not registered
	Apr 15 18:56:03 pause-176700 kubelet[6411]: E0415 18:56:03.812923    6411 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 15 18:56:03 pause-176700 kubelet[6411]: E0415 18:56:03.812946    6411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c890678f-54c9-40e0-99c6-1ec4aa396a04-kube-proxy podName:c890678f-54c9-40e0-99c6-1ec4aa396a04 nodeName:}" failed. No retries permitted until 2024-04-15 18:56:05.812924871 +0000 UTC m=+5.017357343 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/c890678f-54c9-40e0-99c6-1ec4aa396a04-kube-proxy") pod "kube-proxy-7lm47" (UID: "c890678f-54c9-40e0-99c6-1ec4aa396a04") : object "kube-system"/"kube-proxy" not registered
	Apr 15 18:56:03 pause-176700 kubelet[6411]: E0415 18:56:03.812989    6411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5488f80b-0761-4c97-a2c7-08aeca6362d0-config-volume podName:5488f80b-0761-4c97-a2c7-08aeca6362d0 nodeName:}" failed. No retries permitted until 2024-04-15 18:56:05.812970577 +0000 UTC m=+5.017403149 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5488f80b-0761-4c97-a2c7-08aeca6362d0-config-volume") pod "coredns-76f75df574-t2b6w" (UID: "5488f80b-0761-4c97-a2c7-08aeca6362d0") : object "kube-system"/"coredns" not registered
	Apr 15 18:56:05 pause-176700 kubelet[6411]: E0415 18:56:05.833975    6411 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: object "kube-system"/"kube-proxy" not registered
	Apr 15 18:56:05 pause-176700 kubelet[6411]: E0415 18:56:05.834139    6411 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 15 18:56:05 pause-176700 kubelet[6411]: E0415 18:56:05.834424    6411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5488f80b-0761-4c97-a2c7-08aeca6362d0-config-volume podName:5488f80b-0761-4c97-a2c7-08aeca6362d0 nodeName:}" failed. No retries permitted until 2024-04-15 18:56:09.834353095 +0000 UTC m=+9.038785567 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5488f80b-0761-4c97-a2c7-08aeca6362d0-config-volume") pod "coredns-76f75df574-t2b6w" (UID: "5488f80b-0761-4c97-a2c7-08aeca6362d0") : object "kube-system"/"coredns" not registered
	Apr 15 18:56:05 pause-176700 kubelet[6411]: E0415 18:56:05.834449    6411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c890678f-54c9-40e0-99c6-1ec4aa396a04-kube-proxy podName:c890678f-54c9-40e0-99c6-1ec4aa396a04 nodeName:}" failed. No retries permitted until 2024-04-15 18:56:09.834440107 +0000 UTC m=+9.038872579 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/c890678f-54c9-40e0-99c6-1ec4aa396a04-kube-proxy") pod "kube-proxy-7lm47" (UID: "c890678f-54c9-40e0-99c6-1ec4aa396a04") : object "kube-system"/"kube-proxy" not registered
	Apr 15 18:56:07 pause-176700 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Apr 15 18:56:07 pause-176700 kubelet[6411]: I0415 18:56:07.811899    6411 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Apr 15 18:56:07 pause-176700 systemd[1]: kubelet.service: Deactivated successfully.
	Apr 15 18:56:07 pause-176700 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 18:56:32.057933   11408 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-176700 -n pause-176700
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-176700 -n pause-176700: exit status 2 (1.5961296s)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 18:56:45.436914   13172 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "pause-176700" apiserver is not running, skipping kubectl commands (state="Paused")
--- FAIL: TestPause/serial/PauseAgain (45.97s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (422.02s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-075400 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.20.0
E0415 19:10:30.842796   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-008800\client.crt: The system cannot find the path specified.
E0415 19:10:32.685353   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-008800\client.crt: The system cannot find the path specified.
E0415 19:10:37.131325   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-662500\client.crt: The system cannot find the path specified.
E0415 19:10:43.951171   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-008800\client.crt: The system cannot find the path specified.
E0415 19:10:48.913919   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-008800\client.crt: The system cannot find the path specified.
E0415 19:10:48.929904   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-008800\client.crt: The system cannot find the path specified.
E0415 19:10:48.945917   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-008800\client.crt: The system cannot find the path specified.
E0415 19:10:48.976904   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-008800\client.crt: The system cannot find the path specified.
E0415 19:10:49.022937   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-008800\client.crt: The system cannot find the path specified.
E0415 19:10:49.116802   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-008800\client.crt: The system cannot find the path specified.
E0415 19:10:49.290909   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-008800\client.crt: The system cannot find the path specified.
E0415 19:10:49.620920   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-008800\client.crt: The system cannot find the path specified.
E0415 19:10:50.267859   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-008800\client.crt: The system cannot find the path specified.
E0415 19:10:51.564587   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-008800\client.crt: The system cannot find the path specified.
E0415 19:10:53.168861   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-008800\client.crt: The system cannot find the path specified.
E0415 19:10:54.133374   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-008800\client.crt: The system cannot find the path specified.
E0415 19:10:54.488147   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-008800\client.crt: The system cannot find the path specified.
E0415 19:10:59.267010   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-008800\client.crt: The system cannot find the path specified.
E0415 19:11:09.509773   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-008800\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p old-k8s-version-075400 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.20.0: exit status 102 (6m54.6656789s)

                                                
                                                
-- stdout --
	* [old-k8s-version-075400] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18634
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-075400" primary control-plane node in "old-k8s-version-075400" cluster
	* Pulling base image v0.0.43-1713176859-18634 ...
	* Restarting existing docker container for "old-k8s-version-075400" ...
	* Preparing Kubernetes v1.20.0 on Docker 26.0.1 ...
	* Verifying Kubernetes components...
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-075400 addons enable metrics-server
	
	* Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 19:10:28.122813    6796 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0415 19:10:28.202427    6796 out.go:291] Setting OutFile to fd 2024 ...
	I0415 19:10:28.202725    6796 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 19:10:28.202725    6796 out.go:304] Setting ErrFile to fd 1908...
	I0415 19:10:28.202725    6796 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 19:10:28.228159    6796 out.go:298] Setting JSON to false
	I0415 19:10:28.233398    6796 start.go:129] hostinfo: {"hostname":"minikube4","uptime":24698,"bootTime":1713183530,"procs":209,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W0415 19:10:28.233455    6796 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 19:10:28.237062    6796 out.go:177] * [old-k8s-version-075400] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0415 19:10:28.243075    6796 notify.go:220] Checking for updates...
	I0415 19:10:28.245898    6796 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0415 19:10:28.249548    6796 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 19:10:28.255910    6796 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I0415 19:10:28.259351    6796 out.go:177]   - MINIKUBE_LOCATION=18634
	I0415 19:10:28.262350    6796 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 19:10:28.265220    6796 config.go:182] Loaded profile config "old-k8s-version-075400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0415 19:10:28.271083    6796 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0415 19:10:28.273405    6796 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 19:10:28.624297    6796 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0415 19:10:28.641289    6796 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 19:10:29.046699    6796 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:101 OomKillDisable:true NGoroutines:97 SystemTime:2024-04-15 19:10:28.988464336 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 In
dexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersio
n:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:
Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 19:10:29.051746    6796 out.go:177] * Using the docker driver based on existing profile
	I0415 19:10:29.053968    6796 start.go:297] selected driver: docker
	I0415 19:10:29.054026    6796 start.go:901] validating driver "docker" against &{Name:old-k8s-version-075400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-075400 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountSt
ring:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 19:10:29.054188    6796 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 19:10:29.138733    6796 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 19:10:29.551071    6796 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:101 OomKillDisable:true NGoroutines:97 SystemTime:2024-04-15 19:10:29.504262265 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 In
dexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersio
n:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:
Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 19:10:29.552078    6796 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 19:10:29.552078    6796 cni.go:84] Creating CNI manager for ""
	I0415 19:10:29.552078    6796 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0415 19:10:29.552078    6796 start.go:340] cluster config:
	{Name:old-k8s-version-075400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-075400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 19:10:29.556089    6796 out.go:177] * Starting "old-k8s-version-075400" primary control-plane node in "old-k8s-version-075400" cluster
	I0415 19:10:29.559343    6796 cache.go:121] Beginning downloading kic base image for docker with docker
	I0415 19:10:29.561085    6796 out.go:177] * Pulling base image v0.0.43-1713176859-18634 ...
	I0415 19:10:29.566080    6796 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0415 19:10:29.566080    6796 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b in local docker daemon
	I0415 19:10:29.566080    6796 preload.go:147] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0415 19:10:29.566080    6796 cache.go:56] Caching tarball of preloaded images
	I0415 19:10:29.567076    6796 preload.go:173] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0415 19:10:29.567076    6796 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0415 19:10:29.567076    6796 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-075400\config.json ...
	I0415 19:10:29.791047    6796 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b in local docker daemon, skipping pull
	I0415 19:10:29.791047    6796 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b exists in daemon, skipping load
	I0415 19:10:29.791047    6796 cache.go:194] Successfully downloaded all kic artifacts
	I0415 19:10:29.791047    6796 start.go:360] acquireMachinesLock for old-k8s-version-075400: {Name:mkcdea966e78d4a7bab7a21c4515cb507937e792 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 19:10:29.791047    6796 start.go:364] duration metric: took 0s to acquireMachinesLock for "old-k8s-version-075400"
	I0415 19:10:29.791047    6796 start.go:96] Skipping create...Using existing machine configuration
	I0415 19:10:29.791047    6796 fix.go:54] fixHost starting: 
	I0415 19:10:29.808852    6796 cli_runner.go:164] Run: docker container inspect old-k8s-version-075400 --format={{.State.Status}}
	I0415 19:10:30.030707    6796 fix.go:112] recreateIfNeeded on old-k8s-version-075400: state=Stopped err=<nil>
	W0415 19:10:30.030707    6796 fix.go:138] unexpected machine state, will restart: <nil>
	I0415 19:10:30.034312    6796 out.go:177] * Restarting existing docker container for "old-k8s-version-075400" ...
	I0415 19:10:30.056619    6796 cli_runner.go:164] Run: docker start old-k8s-version-075400
	I0415 19:10:30.857802    6796 cli_runner.go:164] Run: docker container inspect old-k8s-version-075400 --format={{.State.Status}}
	I0415 19:10:31.121426    6796 kic.go:430] container "old-k8s-version-075400" state is running.
	I0415 19:10:31.139414    6796 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-075400
	I0415 19:10:31.410441    6796 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-075400\config.json ...
	I0415 19:10:31.413430    6796 machine.go:94] provisionDockerMachine start ...
	I0415 19:10:31.429440    6796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-075400
	I0415 19:10:31.703455    6796 main.go:141] libmachine: Using SSH client type: native
	I0415 19:10:31.704457    6796 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeda1c0] 0xedcda0 <nil>  [] 0s} 127.0.0.1 56608 <nil> <nil>}
	I0415 19:10:31.704457    6796 main.go:141] libmachine: About to run SSH command:
	hostname
	I0415 19:10:31.709442    6796 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0415 19:10:34.940183    6796 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-075400
	
	I0415 19:10:34.940183    6796 ubuntu.go:169] provisioning hostname "old-k8s-version-075400"
	I0415 19:10:34.958192    6796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-075400
	I0415 19:10:35.232198    6796 main.go:141] libmachine: Using SSH client type: native
	I0415 19:10:35.233206    6796 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeda1c0] 0xedcda0 <nil>  [] 0s} 127.0.0.1 56608 <nil> <nil>}
	I0415 19:10:35.233206    6796 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-075400 && echo "old-k8s-version-075400" | sudo tee /etc/hostname
	I0415 19:10:35.489828    6796 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-075400
	
	I0415 19:10:35.504849    6796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-075400
	I0415 19:10:35.766849    6796 main.go:141] libmachine: Using SSH client type: native
	I0415 19:10:35.766849    6796 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeda1c0] 0xedcda0 <nil>  [] 0s} 127.0.0.1 56608 <nil> <nil>}
	I0415 19:10:35.767860    6796 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-075400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-075400/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-075400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0415 19:10:35.988861    6796 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0415 19:10:35.988861    6796 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I0415 19:10:35.988861    6796 ubuntu.go:177] setting up certificates
	I0415 19:10:35.988861    6796 provision.go:84] configureAuth start
	I0415 19:10:36.003865    6796 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-075400
	I0415 19:10:36.288883    6796 provision.go:143] copyHostCerts
	I0415 19:10:36.289882    6796 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I0415 19:10:36.289882    6796 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I0415 19:10:36.290878    6796 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0415 19:10:36.291879    6796 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I0415 19:10:36.291879    6796 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I0415 19:10:36.291879    6796 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0415 19:10:36.294101    6796 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I0415 19:10:36.294101    6796 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I0415 19:10:36.294885    6796 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I0415 19:10:36.295880    6796 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.old-k8s-version-075400 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-075400]
	I0415 19:10:36.818672    6796 provision.go:177] copyRemoteCerts
	I0415 19:10:36.846666    6796 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0415 19:10:36.862673    6796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-075400
	I0415 19:10:37.126327    6796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56608 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\old-k8s-version-075400\id_rsa Username:docker}
	I0415 19:10:37.258323    6796 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0415 19:10:37.314356    6796 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1233 bytes)
	I0415 19:10:37.381044    6796 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0415 19:10:37.440041    6796 provision.go:87] duration metric: took 1.4511133s to configureAuth
	I0415 19:10:37.440041    6796 ubuntu.go:193] setting minikube options for container-runtime
	I0415 19:10:37.440041    6796 config.go:182] Loaded profile config "old-k8s-version-075400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0415 19:10:37.456034    6796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-075400
	I0415 19:10:37.719079    6796 main.go:141] libmachine: Using SSH client type: native
	I0415 19:10:37.720035    6796 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeda1c0] 0xedcda0 <nil>  [] 0s} 127.0.0.1 56608 <nil> <nil>}
	I0415 19:10:37.720035    6796 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0415 19:10:37.945066    6796 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0415 19:10:37.945066    6796 ubuntu.go:71] root file system type: overlay
	I0415 19:10:37.945066    6796 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0415 19:10:37.964064    6796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-075400
	I0415 19:10:38.243848    6796 main.go:141] libmachine: Using SSH client type: native
	I0415 19:10:38.244817    6796 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeda1c0] 0xedcda0 <nil>  [] 0s} 127.0.0.1 56608 <nil> <nil>}
	I0415 19:10:38.244817    6796 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0415 19:10:38.494842    6796 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0415 19:10:38.512836    6796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-075400
	I0415 19:10:38.763857    6796 main.go:141] libmachine: Using SSH client type: native
	I0415 19:10:38.763857    6796 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeda1c0] 0xedcda0 <nil>  [] 0s} 127.0.0.1 56608 <nil> <nil>}
	I0415 19:10:38.763857    6796 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0415 19:10:39.005290    6796 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0415 19:10:39.005290    6796 machine.go:97] duration metric: took 7.5915067s to provisionDockerMachine
	I0415 19:10:39.005290    6796 start.go:293] postStartSetup for "old-k8s-version-075400" (driver="docker")
	I0415 19:10:39.005290    6796 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0415 19:10:39.040676    6796 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0415 19:10:39.058675    6796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-075400
	I0415 19:10:39.326203    6796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56608 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\old-k8s-version-075400\id_rsa Username:docker}
	I0415 19:10:39.503187    6796 ssh_runner.go:195] Run: cat /etc/os-release
	I0415 19:10:39.514561    6796 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0415 19:10:39.514561    6796 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0415 19:10:39.514561    6796 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0415 19:10:39.514561    6796 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0415 19:10:39.514561    6796 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I0415 19:10:39.515572    6796 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I0415 19:10:39.516570    6796 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117482.pem -> 117482.pem in /etc/ssl/certs
	I0415 19:10:39.542556    6796 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0415 19:10:39.572543    6796 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117482.pem --> /etc/ssl/certs/117482.pem (1708 bytes)
	I0415 19:10:39.650561    6796 start.go:296] duration metric: took 645.2416ms for postStartSetup
	I0415 19:10:39.673550    6796 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 19:10:39.689552    6796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-075400
	I0415 19:10:39.959594    6796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56608 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\old-k8s-version-075400\id_rsa Username:docker}
	I0415 19:10:40.112576    6796 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 19:10:40.135597    6796 fix.go:56] duration metric: took 10.3440689s for fixHost
	I0415 19:10:40.136592    6796 start.go:83] releasing machines lock for "old-k8s-version-075400", held for 10.3450638s
	I0415 19:10:40.156612    6796 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-075400
	I0415 19:10:40.423590    6796 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0415 19:10:40.443592    6796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-075400
	I0415 19:10:40.447597    6796 ssh_runner.go:195] Run: cat /version.json
	I0415 19:10:40.467603    6796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-075400
	I0415 19:10:40.702603    6796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56608 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\old-k8s-version-075400\id_rsa Username:docker}
	I0415 19:10:40.732606    6796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56608 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\old-k8s-version-075400\id_rsa Username:docker}
	I0415 19:10:41.105339    6796 ssh_runner.go:195] Run: systemctl --version
	I0415 19:10:41.148383    6796 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0415 19:10:41.189349    6796 ssh_runner.go:195] Run: sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	W0415 19:10:41.214358    6796 start.go:438] unable to name loopback interface in configureRuntimes: unable to patch loopback cni config "/etc/cni/net.d/*loopback.conf*": sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;: Process exited with status 1
	stdout:
	
	stderr:
	find: '\\etc\\cni\\net.d': No such file or directory
	I0415 19:10:41.236417    6796 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0415 19:10:41.306341    6796 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0415 19:10:41.359381    6796 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0415 19:10:41.359381    6796 start.go:494] detecting cgroup driver to use...
	I0415 19:10:41.359381    6796 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0415 19:10:41.359381    6796 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 19:10:41.432390    6796 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0415 19:10:41.479362    6796 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0415 19:10:41.508363    6796 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0415 19:10:41.533371    6796 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0415 19:10:41.584361    6796 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 19:10:41.642375    6796 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0415 19:10:41.693365    6796 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 19:10:41.753410    6796 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0415 19:10:41.796003    6796 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0415 19:10:41.844022    6796 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0415 19:10:41.881005    6796 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0415 19:10:41.919079    6796 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 19:10:42.129129    6796 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0415 19:10:42.340711    6796 start.go:494] detecting cgroup driver to use...
	I0415 19:10:42.340711    6796 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0415 19:10:42.362690    6796 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0415 19:10:42.386701    6796 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0415 19:10:42.404735    6796 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0415 19:10:42.438998    6796 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 19:10:42.491503    6796 ssh_runner.go:195] Run: which cri-dockerd
	I0415 19:10:42.522862    6796 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0415 19:10:42.544480    6796 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0415 19:10:42.604812    6796 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0415 19:10:42.833710    6796 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0415 19:10:43.052183    6796 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0415 19:10:43.052183    6796 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0415 19:10:43.106178    6796 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 19:10:43.323564    6796 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0415 19:10:44.634462    6796 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.3106752s)
	I0415 19:10:44.650670    6796 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0415 19:10:44.729871    6796 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0415 19:10:44.801874    6796 out.go:204] * Preparing Kubernetes v1.20.0 on Docker 26.0.1 ...
	I0415 19:10:44.812895    6796 cli_runner.go:164] Run: docker exec -t old-k8s-version-075400 dig +short host.docker.internal
	I0415 19:10:45.137103    6796 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0415 19:10:45.153109    6796 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0415 19:10:45.164119    6796 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0415 19:10:45.198258    6796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-075400
	I0415 19:10:45.390723    6796 kubeadm.go:877] updating cluster {Name:old-k8s-version-075400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-075400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0415 19:10:45.390723    6796 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0415 19:10:45.402745    6796 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0415 19:10:45.456222    6796 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-proxy:v1.20.0
	k8s.gcr.io/kube-scheduler:v1.20.0
	k8s.gcr.io/kube-apiserver:v1.20.0
	k8s.gcr.io/kube-controller-manager:v1.20.0
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	k8s.gcr.io/pause:3.2
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0415 19:10:45.456222    6796 docker.go:691] registry.k8s.io/kube-apiserver:v1.20.0 wasn't preloaded
	I0415 19:10:45.472482    6796 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0415 19:10:45.507217    6796 ssh_runner.go:195] Run: which lz4
	I0415 19:10:45.538659    6796 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0415 19:10:45.550667    6796 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0415 19:10:45.550667    6796 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (401930599 bytes)
	I0415 19:11:00.568520    6796 docker.go:649] duration metric: took 15.0463123s to copy over tarball
	I0415 19:11:00.583588    6796 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0415 19:11:05.201474    6796 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (4.6176719s)
	I0415 19:11:05.201474    6796 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0415 19:11:05.293947    6796 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0415 19:11:05.316001    6796 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2824 bytes)
	I0415 19:11:05.373341    6796 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 19:11:05.544533    6796 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0415 19:11:15.513672    6796 ssh_runner.go:235] Completed: sudo systemctl restart docker: (9.9686753s)
	I0415 19:11:15.525782    6796 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0415 19:11:15.572140    6796 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-proxy:v1.20.0
	k8s.gcr.io/kube-controller-manager:v1.20.0
	k8s.gcr.io/kube-apiserver:v1.20.0
	k8s.gcr.io/kube-scheduler:v1.20.0
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	k8s.gcr.io/pause:3.2
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0415 19:11:15.572165    6796 docker.go:691] registry.k8s.io/kube-apiserver:v1.20.0 wasn't preloaded
	I0415 19:11:15.572165    6796 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0415 19:11:15.596025    6796 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0415 19:11:15.603490    6796 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0415 19:11:15.603490    6796 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0415 19:11:15.606189    6796 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0415 19:11:15.606604    6796 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0415 19:11:15.612676    6796 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0415 19:11:15.616250    6796 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0415 19:11:15.616250    6796 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0415 19:11:15.622115    6796 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0415 19:11:15.626639    6796 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0415 19:11:15.626639    6796 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0415 19:11:15.627639    6796 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0415 19:11:15.630904    6796 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0415 19:11:15.630904    6796 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0415 19:11:15.640418    6796 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0415 19:11:15.652411    6796 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	W0415 19:11:15.720441    6796 image.go:187] authn lookup for gcr.io/k8s-minikube/storage-provisioner:v5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0415 19:11:15.797510    6796 image.go:187] authn lookup for registry.k8s.io/kube-proxy:v1.20.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0415 19:11:15.891860    6796 image.go:187] authn lookup for registry.k8s.io/coredns:1.7.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0415 19:11:15.983861    6796 image.go:187] authn lookup for registry.k8s.io/etcd:3.4.13-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0415 19:11:16.066568    6796 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	W0415 19:11:16.077572    6796 image.go:187] authn lookup for registry.k8s.io/pause:3.2 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0415 19:11:16.171645    6796 image.go:187] authn lookup for registry.k8s.io/kube-controller-manager:v1.20.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0415 19:11:16.240962    6796 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0415 19:11:16.253991    6796 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	W0415 19:11:16.279846    6796 image.go:187] authn lookup for registry.k8s.io/kube-apiserver:v1.20.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0415 19:11:16.290845    6796 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0415 19:11:16.290845    6796 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.20.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.20.0
	I0415 19:11:16.290845    6796 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0415 19:11:16.291856    6796 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0415 19:11:16.308859    6796 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.20.0
	I0415 19:11:16.312859    6796 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0415 19:11:16.312859    6796 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns:1.7.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.7.0
	I0415 19:11:16.312859    6796 docker.go:337] Removing image: registry.k8s.io/coredns:1.7.0
	I0415 19:11:16.327855    6796 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.7.0
	I0415 19:11:16.333851    6796 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0415 19:11:16.341848    6796 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0415 19:11:16.341848    6796 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.4.13-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.13-0
	I0415 19:11:16.341848    6796 docker.go:337] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0415 19:11:16.363860    6796 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.13-0
	I0415 19:11:16.372985    6796 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.20.0
	W0415 19:11:16.403842    6796 image.go:187] authn lookup for registry.k8s.io/kube-scheduler:v1.20.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0415 19:11:16.421845    6796 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0415 19:11:16.423854    6796 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.7.0
	I0415 19:11:16.431855    6796 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0415 19:11:16.431855    6796 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.2 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.2
	I0415 19:11:16.431855    6796 docker.go:337] Removing image: registry.k8s.io/pause:3.2
	I0415 19:11:16.434856    6796 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.13-0
	I0415 19:11:16.445861    6796 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I0415 19:11:16.464857    6796 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0415 19:11:16.464857    6796 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.20.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.20.0
	I0415 19:11:16.464857    6796 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0415 19:11:16.478850    6796 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0415 19:11:16.490862    6796 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0415 19:11:16.534134    6796 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.2
	I0415 19:11:16.558446    6796 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.20.0
	I0415 19:11:16.608825    6796 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0415 19:11:16.609798    6796 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.20.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.20.0
	I0415 19:11:16.609798    6796 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0415 19:11:16.620784    6796 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0415 19:11:16.634791    6796 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0415 19:11:16.666800    6796 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.20.0
	I0415 19:11:16.675795    6796 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0415 19:11:16.675795    6796 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.20.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.20.0
	I0415 19:11:16.675795    6796 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0415 19:11:16.684795    6796 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0415 19:11:16.733784    6796 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.20.0
	I0415 19:11:16.733784    6796 cache_images.go:92] duration metric: took 1.1615651s to LoadCachedImages
	W0415 19:11:16.733784    6796 out.go:239] X Unable to load cached images: loading cached images: CreateFile C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.20.0: The system cannot find the file specified.
	X Unable to load cached images: loading cached images: CreateFile C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.20.0: The system cannot find the file specified.
	I0415 19:11:16.733784    6796 kubeadm.go:928] updating node { 192.168.85.2 8443 v1.20.0 docker true true} ...
	I0415 19:11:16.734792    6796 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-075400 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-075400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0415 19:11:16.745801    6796 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0415 19:11:16.867106    6796 cni.go:84] Creating CNI manager for ""
	I0415 19:11:16.867150    6796 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0415 19:11:16.867241    6796 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0415 19:11:16.867241    6796 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-075400 NodeName:old-k8s-version-075400 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0415 19:11:16.867716    6796 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-075400"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0415 19:11:16.878719    6796 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0415 19:11:16.895717    6796 binaries.go:44] Found k8s binaries, skipping transfer
	I0415 19:11:16.906720    6796 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0415 19:11:16.934405    6796 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0415 19:11:16.969335    6796 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0415 19:11:17.002277    6796 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2118 bytes)
	I0415 19:11:17.053431    6796 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0415 19:11:17.067114    6796 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0415 19:11:17.106036    6796 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 19:11:17.291008    6796 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0415 19:11:17.321959    6796 certs.go:68] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-075400 for IP: 192.168.85.2
	I0415 19:11:17.321959    6796 certs.go:194] generating shared ca certs ...
	I0415 19:11:17.321959    6796 certs.go:226] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 19:11:17.322941    6796 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I0415 19:11:17.323690    6796 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I0415 19:11:17.323690    6796 certs.go:256] generating profile certs ...
	I0415 19:11:17.324674    6796 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-075400\client.key
	I0415 19:11:17.325109    6796 certs.go:359] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-075400\apiserver.key.dd26dda6
	I0415 19:11:17.325299    6796 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-075400\proxy-client.key
	I0415 19:11:17.327676    6796 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11748.pem (1338 bytes)
	W0415 19:11:17.328315    6796 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11748_empty.pem, impossibly tiny 0 bytes
	I0415 19:11:17.328315    6796 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0415 19:11:17.328958    6796 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0415 19:11:17.329519    6796 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0415 19:11:17.329890    6796 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0415 19:11:17.330194    6796 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117482.pem (1708 bytes)
	I0415 19:11:17.331798    6796 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0415 19:11:17.386421    6796 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0415 19:11:17.433352    6796 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0415 19:11:17.482982    6796 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0415 19:11:17.549066    6796 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-075400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0415 19:11:17.629496    6796 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-075400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0415 19:11:17.718693    6796 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-075400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0415 19:11:17.805896    6796 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-075400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0415 19:11:17.867304    6796 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11748.pem --> /usr/share/ca-certificates/11748.pem (1338 bytes)
	I0415 19:11:18.011481    6796 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117482.pem --> /usr/share/ca-certificates/117482.pem (1708 bytes)
	I0415 19:11:18.069884    6796 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0415 19:11:18.115915    6796 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0415 19:11:18.165400    6796 ssh_runner.go:195] Run: openssl version
	I0415 19:11:18.196135    6796 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0415 19:11:18.248138    6796 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0415 19:11:18.306138    6796 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I0415 19:11:18.327132    6796 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0415 19:11:18.361117    6796 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0415 19:11:18.434610    6796 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11748.pem && ln -fs /usr/share/ca-certificates/11748.pem /etc/ssl/certs/11748.pem"
	I0415 19:11:18.534615    6796 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11748.pem
	I0415 19:11:18.546593    6796 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 17:52 /usr/share/ca-certificates/11748.pem
	I0415 19:11:18.561574    6796 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11748.pem
	I0415 19:11:18.621467    6796 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11748.pem /etc/ssl/certs/51391683.0"
	I0415 19:11:18.684030    6796 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/117482.pem && ln -fs /usr/share/ca-certificates/117482.pem /etc/ssl/certs/117482.pem"
	I0415 19:11:18.756041    6796 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117482.pem
	I0415 19:11:18.816660    6796 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 17:52 /usr/share/ca-certificates/117482.pem
	I0415 19:11:18.833704    6796 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117482.pem
	I0415 19:11:18.869659    6796 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/117482.pem /etc/ssl/certs/3ec20f2e.0"
	I0415 19:11:18.954930    6796 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0415 19:11:19.037931    6796 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0415 19:11:19.073944    6796 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0415 19:11:19.136563    6796 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0415 19:11:19.233665    6796 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0415 19:11:19.339538    6796 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0415 19:11:19.385092    6796 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0415 19:11:19.415228    6796 kubeadm.go:391] StartCluster: {Name:old-k8s-version-075400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-075400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 19:11:19.428237    6796 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0415 19:11:19.481873    6796 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0415 19:11:19.522334    6796 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0415 19:11:19.522334    6796 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0415 19:11:19.522334    6796 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0415 19:11:19.543585    6796 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0415 19:11:19.616764    6796 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0415 19:11:19.634369    6796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-075400
	I0415 19:11:19.842549    6796 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-075400" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0415 19:11:19.844547    6796 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-075400" cluster setting kubeconfig missing "old-k8s-version-075400" context setting]
	I0415 19:11:19.847629    6796 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 19:11:19.880514    6796 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0415 19:11:20.009546    6796 kubeadm.go:624] The running cluster does not require reconfiguration: 127.0.0.1
	I0415 19:11:20.009602    6796 kubeadm.go:591] duration metric: took 487.2449ms to restartPrimaryControlPlane
	I0415 19:11:20.009602    6796 kubeadm.go:393] duration metric: took 594.3462ms to StartCluster
	I0415 19:11:20.009602    6796 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 19:11:20.011213    6796 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0415 19:11:20.018670    6796 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 19:11:20.020342    6796 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 19:11:20.024147    6796 out.go:177] * Verifying Kubernetes components...
	I0415 19:11:20.020342    6796 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0415 19:11:20.020342    6796 config.go:182] Loaded profile config "old-k8s-version-075400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0415 19:11:20.024147    6796 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-075400"
	I0415 19:11:20.024147    6796 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-075400"
	W0415 19:11:20.024147    6796 addons.go:243] addon storage-provisioner should already be in state true
	I0415 19:11:20.024147    6796 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-075400"
	I0415 19:11:20.024147    6796 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-075400"
	I0415 19:11:20.024147    6796 host.go:66] Checking if "old-k8s-version-075400" exists ...
	I0415 19:11:20.024712    6796 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-075400"
	I0415 19:11:20.027935    6796 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-075400"
	W0415 19:11:20.027935    6796 addons.go:243] addon metrics-server should already be in state true
	I0415 19:11:20.028934    6796 host.go:66] Checking if "old-k8s-version-075400" exists ...
	I0415 19:11:20.024712    6796 addons.go:69] Setting dashboard=true in profile "old-k8s-version-075400"
	I0415 19:11:20.029928    6796 addons.go:234] Setting addon dashboard=true in "old-k8s-version-075400"
	W0415 19:11:20.030922    6796 addons.go:243] addon dashboard should already be in state true
	I0415 19:11:20.030922    6796 host.go:66] Checking if "old-k8s-version-075400" exists ...
	I0415 19:11:20.054518    6796 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 19:11:20.059571    6796 cli_runner.go:164] Run: docker container inspect old-k8s-version-075400 --format={{.State.Status}}
	I0415 19:11:20.060520    6796 cli_runner.go:164] Run: docker container inspect old-k8s-version-075400 --format={{.State.Status}}
	I0415 19:11:20.062520    6796 cli_runner.go:164] Run: docker container inspect old-k8s-version-075400 --format={{.State.Status}}
	I0415 19:11:20.067232    6796 cli_runner.go:164] Run: docker container inspect old-k8s-version-075400 --format={{.State.Status}}
	I0415 19:11:20.288112    6796 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0415 19:11:20.290138    6796 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0415 19:11:20.291120    6796 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0415 19:11:20.297110    6796 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0415 19:11:20.299111    6796 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0415 19:11:20.299111    6796 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0415 19:11:20.300111    6796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-075400
	I0415 19:11:20.320113    6796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-075400
	I0415 19:11:20.328115    6796 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-075400"
	W0415 19:11:20.329117    6796 addons.go:243] addon default-storageclass should already be in state true
	I0415 19:11:20.329117    6796 host.go:66] Checking if "old-k8s-version-075400" exists ...
	I0415 19:11:20.339142    6796 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0415 19:11:20.343113    6796 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0415 19:11:20.345126    6796 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0415 19:11:20.345126    6796 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0415 19:11:20.363949    6796 cli_runner.go:164] Run: docker container inspect old-k8s-version-075400 --format={{.State.Status}}
	I0415 19:11:20.364484    6796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-075400
	I0415 19:11:20.537110    6796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56608 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\old-k8s-version-075400\id_rsa Username:docker}
	I0415 19:11:20.549087    6796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56608 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\old-k8s-version-075400\id_rsa Username:docker}
	I0415 19:11:20.563080    6796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56608 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\old-k8s-version-075400\id_rsa Username:docker}
	I0415 19:11:20.577090    6796 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0415 19:11:20.577090    6796 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0415 19:11:20.586085    6796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-075400
	I0415 19:11:20.783922    6796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56608 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\old-k8s-version-075400\id_rsa Username:docker}
	I0415 19:11:20.838220    6796 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0415 19:11:21.046827    6796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-075400
	I0415 19:11:21.117720    6796 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0415 19:11:21.117720    6796 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0415 19:11:21.238227    6796 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-075400" to be "Ready" ...
	I0415 19:11:21.241521    6796 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0415 19:11:21.241521    6796 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0415 19:11:21.250145    6796 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0415 19:11:21.321559    6796 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0415 19:11:21.321559    6796 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0415 19:11:21.520900    6796 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0415 19:11:21.521449    6796 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0415 19:11:21.525323    6796 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0415 19:11:21.525368    6796 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0415 19:11:21.540743    6796 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0415 19:11:21.721662    6796 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0415 19:11:21.721662    6796 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0415 19:11:21.728656    6796 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0415 19:11:21.728656    6796 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0415 19:11:22.107519    6796 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0415 19:11:22.107519    6796 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0415 19:11:22.117535    6796 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0415 19:11:22.117535    6796 retry.go:31] will retry after 168.472816ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0415 19:11:22.128527    6796 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0415 19:11:22.305122    6796 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0415 19:11:22.316579    6796 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0415 19:11:22.316579    6796 retry.go:31] will retry after 159.499792ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0415 19:11:22.318581    6796 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0415 19:11:22.318581    6796 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0415 19:11:22.490217    6796 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0415 19:11:22.609916    6796 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0415 19:11:22.609916    6796 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0415 19:11:22.818368    6796 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0415 19:11:22.818368    6796 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	W0415 19:11:23.014023    6796 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0415 19:11:23.014136    6796 retry.go:31] will retry after 223.148737ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0415 19:11:23.113160    6796 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0415 19:11:23.113160    6796 retry.go:31] will retry after 468.657984ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0415 19:11:23.117164    6796 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0415 19:11:23.117164    6796 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0415 19:11:23.257772    6796 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0415 19:11:23.313761    6796 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0415 19:11:23.313761    6796 retry.go:31] will retry after 416.092978ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0415 19:11:23.338291    6796 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0415 19:11:23.604481    6796 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0415 19:11:23.747344    6796 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0415 19:11:32.816723    6796 node_ready.go:49] node "old-k8s-version-075400" has status "Ready":"True"
	I0415 19:11:32.816723    6796 node_ready.go:38] duration metric: took 11.5779585s for node "old-k8s-version-075400" to be "Ready" ...
	I0415 19:11:32.816723    6796 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0415 19:11:33.128685    6796 pod_ready.go:78] waiting up to 6m0s for pod "coredns-74ff55c5b-b2jzn" in "kube-system" namespace to be "Ready" ...
	I0415 19:11:33.512038    6796 pod_ready.go:92] pod "coredns-74ff55c5b-b2jzn" in "kube-system" namespace has status "Ready":"True"
	I0415 19:11:33.512038    6796 pod_ready.go:81] duration metric: took 383.3353ms for pod "coredns-74ff55c5b-b2jzn" in "kube-system" namespace to be "Ready" ...
	I0415 19:11:33.512038    6796 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-075400" in "kube-system" namespace to be "Ready" ...
	I0415 19:11:33.816919    6796 pod_ready.go:92] pod "etcd-old-k8s-version-075400" in "kube-system" namespace has status "Ready":"True"
	I0415 19:11:33.816919    6796 pod_ready.go:81] duration metric: took 304.8668ms for pod "etcd-old-k8s-version-075400" in "kube-system" namespace to be "Ready" ...
	I0415 19:11:33.817917    6796 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-075400" in "kube-system" namespace to be "Ready" ...
	I0415 19:11:35.432373    6796 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (12.174035s)
	I0415 19:11:35.432373    6796 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-075400"
	I0415 19:11:36.015233    6796 pod_ready.go:102] pod "kube-apiserver-old-k8s-version-075400" in "kube-system" namespace has status "Ready":"False"
	I0415 19:11:37.022961    6796 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (13.6840353s)
	I0415 19:11:37.022961    6796 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (13.4178571s)
	I0415 19:11:37.022961    6796 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (13.2750015s)
	I0415 19:11:37.026190    6796 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-075400 addons enable metrics-server
	
	I0415 19:11:37.223303    6796 out.go:177] * Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
	I0415 19:11:37.231627    6796 addons.go:505] duration metric: took 17.2102284s for enable addons: enabled=[metrics-server storage-provisioner dashboard default-storageclass]
	I0415 19:11:38.421823    6796 pod_ready.go:102] pod "kube-apiserver-old-k8s-version-075400" in "kube-system" namespace has status "Ready":"False"
	I0415 19:11:40.839671    6796 pod_ready.go:102] pod "kube-apiserver-old-k8s-version-075400" in "kube-system" namespace has status "Ready":"False"
	I0415 19:11:42.852694    6796 pod_ready.go:102] pod "kube-apiserver-old-k8s-version-075400" in "kube-system" namespace has status "Ready":"False"
	I0415 19:11:45.340824    6796 pod_ready.go:102] pod "kube-apiserver-old-k8s-version-075400" in "kube-system" namespace has status "Ready":"False"
	I0415 19:11:46.840260    6796 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-075400" in "kube-system" namespace has status "Ready":"True"
	I0415 19:11:46.840260    6796 pod_ready.go:81] duration metric: took 13.0217378s for pod "kube-apiserver-old-k8s-version-075400" in "kube-system" namespace to be "Ready" ...
	I0415 19:11:46.840260    6796 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-075400" in "kube-system" namespace to be "Ready" ...
	I0415 19:11:48.867306    6796 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-075400" in "kube-system" namespace has status "Ready":"False"
	I0415 19:11:51.367765    6796 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-075400" in "kube-system" namespace has status "Ready":"False"
	I0415 19:11:53.871743    6796 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-075400" in "kube-system" namespace has status "Ready":"False"
	I0415 19:11:56.053771    6796 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-075400" in "kube-system" namespace has status "Ready":"False"
	I0415 19:11:58.360374    6796 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-075400" in "kube-system" namespace has status "Ready":"False"
	I0415 19:12:00.377166    6796 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-075400" in "kube-system" namespace has status "Ready":"False"
	I0415 19:12:02.876404    6796 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-075400" in "kube-system" namespace has status "Ready":"False"
	I0415 19:12:05.361966    6796 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-075400" in "kube-system" namespace has status "Ready":"False"
	I0415 19:12:07.365865    6796 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-075400" in "kube-system" namespace has status "Ready":"False"
	I0415 19:12:09.369967    6796 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-075400" in "kube-system" namespace has status "Ready":"False"
	I0415 19:12:11.867145    6796 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-075400" in "kube-system" namespace has status "Ready":"False"
	I0415 19:12:14.665426    6796 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-075400" in "kube-system" namespace has status "Ready":"False"
	I0415 19:12:16.861713    6796 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-075400" in "kube-system" namespace has status "Ready":"False"
	I0415 19:12:18.931913    6796 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-075400" in "kube-system" namespace has status "Ready":"False"
	I0415 19:12:21.365904    6796 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-075400" in "kube-system" namespace has status "Ready":"False"
	I0415 19:12:23.366682    6796 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-075400" in "kube-system" namespace has status "Ready":"False"
	I0415 19:12:25.368145    6796 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-075400" in "kube-system" namespace has status "Ready":"False"
	I0415 19:12:27.865530    6796 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-075400" in "kube-system" namespace has status "Ready":"False"
	I0415 19:12:29.876939    6796 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-075400" in "kube-system" namespace has status "Ready":"False"
	I0415 19:12:32.368844    6796 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-075400" in "kube-system" namespace has status "Ready":"False"
	I0415 19:12:34.381245    6796 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-075400" in "kube-system" namespace has status "Ready":"False"
	I0415 19:12:36.873508    6796 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-075400" in "kube-system" namespace has status "Ready":"False"
	I0415 19:12:38.873939    6796 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-075400" in "kube-system" namespace has status "Ready":"False"
	I0415 19:12:40.875126    6796 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-075400" in "kube-system" namespace has status "Ready":"False"
	I0415 19:12:43.359737    6796 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-075400" in "kube-system" namespace has status "Ready":"False"
	I0415 19:12:45.369299    6796 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-075400" in "kube-system" namespace has status "Ready":"True"
	I0415 19:12:45.369299    6796 pod_ready.go:81] duration metric: took 58.5263235s for pod "kube-controller-manager-old-k8s-version-075400" in "kube-system" namespace to be "Ready" ...
	I0415 19:12:45.369299    6796 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-td2vz" in "kube-system" namespace to be "Ready" ...
	I0415 19:12:45.381305    6796 pod_ready.go:92] pod "kube-proxy-td2vz" in "kube-system" namespace has status "Ready":"True"
	I0415 19:12:45.381305    6796 pod_ready.go:81] duration metric: took 12.0055ms for pod "kube-proxy-td2vz" in "kube-system" namespace to be "Ready" ...
	I0415 19:12:45.381305    6796 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-075400" in "kube-system" namespace to be "Ready" ...
	I0415 19:12:47.411214    6796 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-075400" in "kube-system" namespace has status "Ready":"False"
	I0415 19:12:49.414050    6796 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-075400" in "kube-system" namespace has status "Ready":"False"
	I0415 19:12:51.908481    6796 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-075400" in "kube-system" namespace has status "Ready":"False"
	I0415 19:12:54.038419    6796 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-075400" in "kube-system" namespace has status "Ready":"False"
	I0415 19:12:56.398878    6796 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-075400" in "kube-system" namespace has status "Ready":"False"
	I0415 19:12:58.419294    6796 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-075400" in "kube-system" namespace has status "Ready":"True"
	I0415 19:12:58.419294    6796 pod_ready.go:81] duration metric: took 13.0373841s for pod "kube-scheduler-old-k8s-version-075400" in "kube-system" namespace to be "Ready" ...
	I0415 19:12:58.419294    6796 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace to be "Ready" ...
	I0415 19:13:00.437932    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:13:02.441058    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:13:04.947042    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:13:07.438278    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:13:09.441445    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:13:11.448916    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:13:13.951161    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:13:16.458956    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:13:18.952063    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:13:21.442512    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:13:23.447806    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:13:25.944409    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:13:27.954026    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:13:30.442627    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:13:32.444478    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:13:34.448514    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:13:36.946568    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:13:39.442244    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:13:41.948115    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:13:43.950255    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:13:46.443104    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:13:48.949445    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:13:51.449765    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:13:53.939564    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:13:56.011722    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:13:58.443826    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:14:00.448807    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:14:02.451601    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:14:04.941212    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:14:06.949323    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:14:08.954428    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:14:11.448088    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:14:13.943993    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:14:15.947374    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:14:17.967255    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:14:20.443973    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:14:22.451570    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:14:24.947408    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:14:26.952104    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:14:29.441428    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:14:31.442304    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:14:33.449531    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:14:35.942155    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:14:37.948315    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:14:39.955958    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:14:42.442485    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:14:44.442786    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:14:46.458265    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:14:48.943396    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:14:50.958259    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:14:53.447867    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:14:55.449699    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:14:57.455410    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:14:59.956660    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:15:02.444757    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:15:04.447285    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:15:06.449465    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:15:08.956143    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:15:11.442448    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:15:13.454005    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:15:15.949493    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:15:23.278691    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:15:25.441578    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:15:27.455400    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:15:29.958880    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:15:32.442467    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:15:34.464186    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:15:36.944252    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:15:38.955923    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:15:41.445418    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:15:43.445631    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:15:45.447092    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:15:47.451298    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:15:49.460020    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:15:51.942326    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:15:53.947351    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:15:55.954415    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:15:58.456534    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:16:00.967698    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:16:03.450686    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:16:05.459976    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:16:07.953591    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:16:09.955641    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:16:11.959885    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:16:14.444421    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:16:16.452735    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:16:18.956452    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:16:21.445762    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:16:23.454439    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:16:25.961725    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:16:28.450028    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:16:30.456040    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:16:32.460940    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:16:34.959134    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:16:37.454839    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:16:39.456744    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:16:41.955538    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:16:44.449962    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:16:46.459052    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:16:48.465965    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:16:50.952175    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:16:52.955439    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:16:54.965930    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:16:57.463789    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:16:58.452616    6796 pod_ready.go:81] duration metric: took 4m0.0222344s for pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace to be "Ready" ...
	E0415 19:16:58.452616    6796 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0415 19:16:58.452616    6796 pod_ready.go:38] duration metric: took 5m25.6208335s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0415 19:16:58.452616    6796 api_server.go:52] waiting for apiserver process to appear ...
	I0415 19:16:58.462643    6796 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 19:16:58.524624    6796 logs.go:276] 2 containers: [57985fd9aaa0 625287f80046]
	I0415 19:16:58.535613    6796 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 19:16:58.573620    6796 logs.go:276] 2 containers: [348005d1b56d 21a6d7992112]
	I0415 19:16:58.592634    6796 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 19:16:58.649625    6796 logs.go:276] 2 containers: [11235e2fd801 3a281f046968]
	I0415 19:16:58.659623    6796 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 19:16:58.706936    6796 logs.go:276] 2 containers: [8edd897fcde4 29ce0c645316]
	I0415 19:16:58.721950    6796 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 19:16:58.767031    6796 logs.go:276] 2 containers: [99e1c3d6c49a 665d0d639f5b]
	I0415 19:16:58.776025    6796 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 19:16:58.833396    6796 logs.go:276] 2 containers: [2a2a08cf9f78 83c733fc4b92]
	I0415 19:16:58.842398    6796 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 19:16:58.893288    6796 logs.go:276] 0 containers: []
	W0415 19:16:58.893288    6796 logs.go:278] No container was found matching "kindnet"
	I0415 19:16:58.911260    6796 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 19:16:58.958871    6796 logs.go:276] 2 containers: [4cc2abc051be 571b76208459]
	I0415 19:16:58.968875    6796 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0415 19:16:59.018472    6796 logs.go:276] 1 containers: [3733b3515cc0]
	I0415 19:16:59.019501    6796 logs.go:123] Gathering logs for kube-apiserver [625287f80046] ...
	I0415 19:16:59.019501    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 625287f80046"
	I0415 19:16:59.151478    6796 logs.go:123] Gathering logs for coredns [3a281f046968] ...
	I0415 19:16:59.151478    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a281f046968"
	I0415 19:16:59.213229    6796 logs.go:123] Gathering logs for container status ...
	I0415 19:16:59.213229    6796 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 19:16:59.293059    6796 logs.go:123] Gathering logs for kubelet ...
	I0415 19:16:59.293059    6796 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0415 19:16:59.391148    6796 logs.go:138] Found kubelet problem: Apr 15 19:11:38 old-k8s-version-075400 kubelet[1653]: E0415 19:11:38.208713    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0415 19:16:59.395137    6796 logs.go:138] Found kubelet problem: Apr 15 19:11:38 old-k8s-version-075400 kubelet[1653]: E0415 19:11:38.923145    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.396136    6796 logs.go:138] Found kubelet problem: Apr 15 19:11:40 old-k8s-version-075400 kubelet[1653]: E0415 19:11:40.139646    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.400137    6796 logs.go:138] Found kubelet problem: Apr 15 19:11:55 old-k8s-version-075400 kubelet[1653]: E0415 19:11:55.520427    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0415 19:16:59.404126    6796 logs.go:138] Found kubelet problem: Apr 15 19:12:00 old-k8s-version-075400 kubelet[1653]: E0415 19:12:00.414418    1653 pod_workers.go:191] Error syncing pod bc788811-26ce-487a-ba75-ce0fe2ecbb60 ("storage-provisioner_kube-system(bc788811-26ce-487a-ba75-ce0fe2ecbb60)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(bc788811-26ce-487a-ba75-ce0fe2ecbb60)"
	W0415 19:16:59.404126    6796 logs.go:138] Found kubelet problem: Apr 15 19:12:08 old-k8s-version-075400 kubelet[1653]: E0415 19:12:08.317183    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.407131    6796 logs.go:138] Found kubelet problem: Apr 15 19:12:19 old-k8s-version-075400 kubelet[1653]: E0415 19:12:19.120308    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0415 19:16:59.408196    6796 logs.go:138] Found kubelet problem: Apr 15 19:12:19 old-k8s-version-075400 kubelet[1653]: E0415 19:12:19.884729    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.410132    6796 logs.go:138] Found kubelet problem: Apr 15 19:12:21 old-k8s-version-075400 kubelet[1653]: E0415 19:12:21.368820    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0415 19:16:59.413121    6796 logs.go:138] Found kubelet problem: Apr 15 19:12:31 old-k8s-version-075400 kubelet[1653]: E0415 19:12:31.804593    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0415 19:16:59.413121    6796 logs.go:138] Found kubelet problem: Apr 15 19:12:33 old-k8s-version-075400 kubelet[1653]: E0415 19:12:33.314894    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.413121    6796 logs.go:138] Found kubelet problem: Apr 15 19:12:45 old-k8s-version-075400 kubelet[1653]: E0415 19:12:45.315528    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.413121    6796 logs.go:138] Found kubelet problem: Apr 15 19:12:46 old-k8s-version-075400 kubelet[1653]: E0415 19:12:46.316103    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.415773    6796 logs.go:138] Found kubelet problem: Apr 15 19:12:58 old-k8s-version-075400 kubelet[1653]: E0415 19:12:58.858560    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0415 19:16:59.416364    6796 logs.go:138] Found kubelet problem: Apr 15 19:13:00 old-k8s-version-075400 kubelet[1653]: E0415 19:13:00.312864    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.417062    6796 logs.go:138] Found kubelet problem: Apr 15 19:13:09 old-k8s-version-075400 kubelet[1653]: E0415 19:13:09.312646    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.418835    6796 logs.go:138] Found kubelet problem: Apr 15 19:13:14 old-k8s-version-075400 kubelet[1653]: E0415 19:13:14.423669    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0415 19:16:59.418835    6796 logs.go:138] Found kubelet problem: Apr 15 19:13:21 old-k8s-version-075400 kubelet[1653]: E0415 19:13:21.309832    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.418835    6796 logs.go:138] Found kubelet problem: Apr 15 19:13:25 old-k8s-version-075400 kubelet[1653]: E0415 19:13:25.310091    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.419840    6796 logs.go:138] Found kubelet problem: Apr 15 19:13:34 old-k8s-version-075400 kubelet[1653]: E0415 19:13:34.311220    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.419840    6796 logs.go:138] Found kubelet problem: Apr 15 19:13:39 old-k8s-version-075400 kubelet[1653]: E0415 19:13:39.309519    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.421848    6796 logs.go:138] Found kubelet problem: Apr 15 19:13:47 old-k8s-version-075400 kubelet[1653]: E0415 19:13:47.774939    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0415 19:16:59.421848    6796 logs.go:138] Found kubelet problem: Apr 15 19:13:54 old-k8s-version-075400 kubelet[1653]: E0415 19:13:54.307787    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.421848    6796 logs.go:138] Found kubelet problem: Apr 15 19:14:00 old-k8s-version-075400 kubelet[1653]: E0415 19:14:00.324109    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.422839    6796 logs.go:138] Found kubelet problem: Apr 15 19:14:07 old-k8s-version-075400 kubelet[1653]: E0415 19:14:07.308728    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.422839    6796 logs.go:138] Found kubelet problem: Apr 15 19:14:11 old-k8s-version-075400 kubelet[1653]: E0415 19:14:11.308825    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.422839    6796 logs.go:138] Found kubelet problem: Apr 15 19:14:19 old-k8s-version-075400 kubelet[1653]: E0415 19:14:19.306081    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.422839    6796 logs.go:138] Found kubelet problem: Apr 15 19:14:24 old-k8s-version-075400 kubelet[1653]: E0415 19:14:24.306865    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.422839    6796 logs.go:138] Found kubelet problem: Apr 15 19:14:30 old-k8s-version-075400 kubelet[1653]: E0415 19:14:30.307010    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.423833    6796 logs.go:138] Found kubelet problem: Apr 15 19:14:38 old-k8s-version-075400 kubelet[1653]: E0415 19:14:38.304934    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.425830    6796 logs.go:138] Found kubelet problem: Apr 15 19:14:45 old-k8s-version-075400 kubelet[1653]: E0415 19:14:45.363819    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0415 19:16:59.425830    6796 logs.go:138] Found kubelet problem: Apr 15 19:14:53 old-k8s-version-075400 kubelet[1653]: E0415 19:14:53.301644    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.425830    6796 logs.go:138] Found kubelet problem: Apr 15 19:14:57 old-k8s-version-075400 kubelet[1653]: E0415 19:14:57.304954    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.425830    6796 logs.go:138] Found kubelet problem: Apr 15 19:15:07 old-k8s-version-075400 kubelet[1653]: E0415 19:15:07.315392    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.425830    6796 logs.go:138] Found kubelet problem: Apr 15 19:15:11 old-k8s-version-075400 kubelet[1653]: E0415 19:15:11.301274    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.426829    6796 logs.go:138] Found kubelet problem: Apr 15 19:15:22 old-k8s-version-075400 kubelet[1653]: E0415 19:15:22.304799    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.428838    6796 logs.go:138] Found kubelet problem: Apr 15 19:15:23 old-k8s-version-075400 kubelet[1653]: E0415 19:15:23.511199    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0415 19:16:59.428838    6796 logs.go:138] Found kubelet problem: Apr 15 19:15:36 old-k8s-version-075400 kubelet[1653]: E0415 19:15:36.301124    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.428838    6796 logs.go:138] Found kubelet problem: Apr 15 19:15:38 old-k8s-version-075400 kubelet[1653]: E0415 19:15:38.301726    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.429833    6796 logs.go:138] Found kubelet problem: Apr 15 19:15:47 old-k8s-version-075400 kubelet[1653]: E0415 19:15:47.298701    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.429833    6796 logs.go:138] Found kubelet problem: Apr 15 19:15:50 old-k8s-version-075400 kubelet[1653]: E0415 19:15:50.298915    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.429833    6796 logs.go:138] Found kubelet problem: Apr 15 19:16:02 old-k8s-version-075400 kubelet[1653]: E0415 19:16:02.298116    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.429833    6796 logs.go:138] Found kubelet problem: Apr 15 19:16:02 old-k8s-version-075400 kubelet[1653]: E0415 19:16:02.298347    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.429833    6796 logs.go:138] Found kubelet problem: Apr 15 19:16:15 old-k8s-version-075400 kubelet[1653]: E0415 19:16:15.317907    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.430829    6796 logs.go:138] Found kubelet problem: Apr 15 19:16:16 old-k8s-version-075400 kubelet[1653]: E0415 19:16:16.301254    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.430829    6796 logs.go:138] Found kubelet problem: Apr 15 19:16:27 old-k8s-version-075400 kubelet[1653]: E0415 19:16:27.297988    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.430829    6796 logs.go:138] Found kubelet problem: Apr 15 19:16:28 old-k8s-version-075400 kubelet[1653]: E0415 19:16:28.299384    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.430829    6796 logs.go:138] Found kubelet problem: Apr 15 19:16:42 old-k8s-version-075400 kubelet[1653]: E0415 19:16:42.296089    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.431829    6796 logs.go:138] Found kubelet problem: Apr 15 19:16:42 old-k8s-version-075400 kubelet[1653]: E0415 19:16:42.297293    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.431829    6796 logs.go:138] Found kubelet problem: Apr 15 19:16:53 old-k8s-version-075400 kubelet[1653]: E0415 19:16:53.293453    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.431829    6796 logs.go:138] Found kubelet problem: Apr 15 19:16:56 old-k8s-version-075400 kubelet[1653]: E0415 19:16:56.293967    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0415 19:16:59.431829    6796 logs.go:123] Gathering logs for kube-apiserver [57985fd9aaa0] ...
	I0415 19:16:59.431829    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57985fd9aaa0"
	I0415 19:16:59.514484    6796 logs.go:123] Gathering logs for storage-provisioner [571b76208459] ...
	I0415 19:16:59.514484    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571b76208459"
	I0415 19:16:59.558800    6796 logs.go:123] Gathering logs for kubernetes-dashboard [3733b3515cc0] ...
	I0415 19:16:59.558895    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3733b3515cc0"
	I0415 19:16:59.604854    6796 logs.go:123] Gathering logs for kube-proxy [99e1c3d6c49a] ...
	I0415 19:16:59.604854    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99e1c3d6c49a"
	I0415 19:16:59.662985    6796 logs.go:123] Gathering logs for kube-controller-manager [83c733fc4b92] ...
	I0415 19:16:59.663104    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83c733fc4b92"
	I0415 19:16:59.731763    6796 logs.go:123] Gathering logs for etcd [348005d1b56d] ...
	I0415 19:16:59.731763    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348005d1b56d"
	I0415 19:16:59.792382    6796 logs.go:123] Gathering logs for etcd [21a6d7992112] ...
	I0415 19:16:59.792382    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21a6d7992112"
	I0415 19:16:59.858510    6796 logs.go:123] Gathering logs for coredns [11235e2fd801] ...
	I0415 19:16:59.858510    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11235e2fd801"
	I0415 19:16:59.920526    6796 logs.go:123] Gathering logs for kube-controller-manager [2a2a08cf9f78] ...
	I0415 19:16:59.920526    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a2a08cf9f78"
	I0415 19:16:59.991499    6796 logs.go:123] Gathering logs for dmesg ...
	I0415 19:16:59.991499    6796 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 19:17:00.029972    6796 logs.go:123] Gathering logs for describe nodes ...
	I0415 19:17:00.029972    6796 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 19:17:00.305282    6796 logs.go:123] Gathering logs for kube-proxy [665d0d639f5b] ...
	I0415 19:17:00.305282    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 665d0d639f5b"
	I0415 19:17:00.372557    6796 logs.go:123] Gathering logs for storage-provisioner [4cc2abc051be] ...
	I0415 19:17:00.372557    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cc2abc051be"
	I0415 19:17:00.434488    6796 logs.go:123] Gathering logs for Docker ...
	I0415 19:17:00.434488    6796 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 19:17:00.480478    6796 logs.go:123] Gathering logs for kube-scheduler [8edd897fcde4] ...
	I0415 19:17:00.480478    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8edd897fcde4"
	I0415 19:17:00.533492    6796 logs.go:123] Gathering logs for kube-scheduler [29ce0c645316] ...
	I0415 19:17:00.534483    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29ce0c645316"
	I0415 19:17:00.590088    6796 out.go:304] Setting ErrFile to fd 1908...
	I0415 19:17:00.590088    6796 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0415 19:17:00.590088    6796 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0415 19:17:00.590088    6796 out.go:239]   Apr 15 19:16:28 old-k8s-version-075400 kubelet[1653]: E0415 19:16:28.299384    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	  Apr 15 19:16:28 old-k8s-version-075400 kubelet[1653]: E0415 19:16:28.299384    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:17:00.590088    6796 out.go:239]   Apr 15 19:16:42 old-k8s-version-075400 kubelet[1653]: E0415 19:16:42.296089    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Apr 15 19:16:42 old-k8s-version-075400 kubelet[1653]: E0415 19:16:42.296089    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:17:00.590088    6796 out.go:239]   Apr 15 19:16:42 old-k8s-version-075400 kubelet[1653]: E0415 19:16:42.297293    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	  Apr 15 19:16:42 old-k8s-version-075400 kubelet[1653]: E0415 19:16:42.297293    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:17:00.590088    6796 out.go:239]   Apr 15 19:16:53 old-k8s-version-075400 kubelet[1653]: E0415 19:16:53.293453    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	  Apr 15 19:16:53 old-k8s-version-075400 kubelet[1653]: E0415 19:16:53.293453    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:17:00.590088    6796 out.go:239]   Apr 15 19:16:56 old-k8s-version-075400 kubelet[1653]: E0415 19:16:56.293967    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Apr 15 19:16:56 old-k8s-version-075400 kubelet[1653]: E0415 19:16:56.293967    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0415 19:17:00.590088    6796 out.go:304] Setting ErrFile to fd 1908...
	I0415 19:17:00.591087    6796 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 19:17:10.630553    6796 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0415 19:17:10.662391    6796 api_server.go:72] duration metric: took 5m50.6258279s to wait for apiserver process to appear ...
	I0415 19:17:10.662391    6796 api_server.go:88] waiting for apiserver healthz status ...
	I0415 19:17:10.672384    6796 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 19:17:10.720882    6796 logs.go:276] 2 containers: [57985fd9aaa0 625287f80046]
	I0415 19:17:10.731881    6796 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 19:17:10.770047    6796 logs.go:276] 2 containers: [348005d1b56d 21a6d7992112]
	I0415 19:17:10.780391    6796 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 19:17:10.821766    6796 logs.go:276] 2 containers: [11235e2fd801 3a281f046968]
	I0415 19:17:10.831763    6796 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 19:17:10.871247    6796 logs.go:276] 2 containers: [8edd897fcde4 29ce0c645316]
	I0415 19:17:10.882722    6796 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 19:17:10.928528    6796 logs.go:276] 2 containers: [99e1c3d6c49a 665d0d639f5b]
	I0415 19:17:10.936514    6796 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 19:17:10.978254    6796 logs.go:276] 2 containers: [2a2a08cf9f78 83c733fc4b92]
	I0415 19:17:10.988270    6796 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 19:17:11.034097    6796 logs.go:276] 0 containers: []
	W0415 19:17:11.034097    6796 logs.go:278] No container was found matching "kindnet"
	I0415 19:17:11.043287    6796 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0415 19:17:11.091656    6796 logs.go:276] 1 containers: [3733b3515cc0]
	I0415 19:17:11.100652    6796 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 19:17:11.142834    6796 logs.go:276] 2 containers: [4cc2abc051be 571b76208459]
	I0415 19:17:11.142834    6796 logs.go:123] Gathering logs for dmesg ...
	I0415 19:17:11.142834    6796 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 19:17:11.168829    6796 logs.go:123] Gathering logs for kube-scheduler [8edd897fcde4] ...
	I0415 19:17:11.168829    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8edd897fcde4"
	I0415 19:17:11.219829    6796 logs.go:123] Gathering logs for kube-scheduler [29ce0c645316] ...
	I0415 19:17:11.219829    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29ce0c645316"
	I0415 19:17:11.287173    6796 logs.go:123] Gathering logs for kube-proxy [99e1c3d6c49a] ...
	I0415 19:17:11.287173    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99e1c3d6c49a"
	I0415 19:17:11.332866    6796 logs.go:123] Gathering logs for kube-controller-manager [83c733fc4b92] ...
	I0415 19:17:11.332866    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83c733fc4b92"
	I0415 19:17:11.401585    6796 logs.go:123] Gathering logs for kube-apiserver [57985fd9aaa0] ...
	I0415 19:17:11.401585    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57985fd9aaa0"
	I0415 19:17:11.470592    6796 logs.go:123] Gathering logs for kube-controller-manager [2a2a08cf9f78] ...
	I0415 19:17:11.470592    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a2a08cf9f78"
	I0415 19:17:11.534523    6796 logs.go:123] Gathering logs for kubernetes-dashboard [3733b3515cc0] ...
	I0415 19:17:11.534523    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3733b3515cc0"
	I0415 19:17:11.581530    6796 logs.go:123] Gathering logs for storage-provisioner [571b76208459] ...
	I0415 19:17:11.581530    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571b76208459"
	I0415 19:17:11.625531    6796 logs.go:123] Gathering logs for container status ...
	I0415 19:17:11.625531    6796 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 19:17:11.722523    6796 logs.go:123] Gathering logs for kubelet ...
	I0415 19:17:11.722523    6796 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0415 19:17:11.804549    6796 logs.go:138] Found kubelet problem: Apr 15 19:11:38 old-k8s-version-075400 kubelet[1653]: E0415 19:11:38.208713    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0415 19:17:11.806547    6796 logs.go:138] Found kubelet problem: Apr 15 19:11:38 old-k8s-version-075400 kubelet[1653]: E0415 19:11:38.923145    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.807548    6796 logs.go:138] Found kubelet problem: Apr 15 19:11:40 old-k8s-version-075400 kubelet[1653]: E0415 19:11:40.139646    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.809544    6796 logs.go:138] Found kubelet problem: Apr 15 19:11:55 old-k8s-version-075400 kubelet[1653]: E0415 19:11:55.520427    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0415 19:17:11.813545    6796 logs.go:138] Found kubelet problem: Apr 15 19:12:00 old-k8s-version-075400 kubelet[1653]: E0415 19:12:00.414418    1653 pod_workers.go:191] Error syncing pod bc788811-26ce-487a-ba75-ce0fe2ecbb60 ("storage-provisioner_kube-system(bc788811-26ce-487a-ba75-ce0fe2ecbb60)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(bc788811-26ce-487a-ba75-ce0fe2ecbb60)"
	W0415 19:17:11.813545    6796 logs.go:138] Found kubelet problem: Apr 15 19:12:08 old-k8s-version-075400 kubelet[1653]: E0415 19:12:08.317183    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.816945    6796 logs.go:138] Found kubelet problem: Apr 15 19:12:19 old-k8s-version-075400 kubelet[1653]: E0415 19:12:19.120308    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0415 19:17:11.817530    6796 logs.go:138] Found kubelet problem: Apr 15 19:12:19 old-k8s-version-075400 kubelet[1653]: E0415 19:12:19.884729    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.819593    6796 logs.go:138] Found kubelet problem: Apr 15 19:12:21 old-k8s-version-075400 kubelet[1653]: E0415 19:12:21.368820    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0415 19:17:11.822545    6796 logs.go:138] Found kubelet problem: Apr 15 19:12:31 old-k8s-version-075400 kubelet[1653]: E0415 19:12:31.804593    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0415 19:17:11.822545    6796 logs.go:138] Found kubelet problem: Apr 15 19:12:33 old-k8s-version-075400 kubelet[1653]: E0415 19:12:33.314894    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.822545    6796 logs.go:138] Found kubelet problem: Apr 15 19:12:45 old-k8s-version-075400 kubelet[1653]: E0415 19:12:45.315528    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.822545    6796 logs.go:138] Found kubelet problem: Apr 15 19:12:46 old-k8s-version-075400 kubelet[1653]: E0415 19:12:46.316103    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.824553    6796 logs.go:138] Found kubelet problem: Apr 15 19:12:58 old-k8s-version-075400 kubelet[1653]: E0415 19:12:58.858560    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0415 19:17:11.825543    6796 logs.go:138] Found kubelet problem: Apr 15 19:13:00 old-k8s-version-075400 kubelet[1653]: E0415 19:13:00.312864    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.825543    6796 logs.go:138] Found kubelet problem: Apr 15 19:13:09 old-k8s-version-075400 kubelet[1653]: E0415 19:13:09.312646    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.827554    6796 logs.go:138] Found kubelet problem: Apr 15 19:13:14 old-k8s-version-075400 kubelet[1653]: E0415 19:13:14.423669    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0415 19:17:11.827554    6796 logs.go:138] Found kubelet problem: Apr 15 19:13:21 old-k8s-version-075400 kubelet[1653]: E0415 19:13:21.309832    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.828542    6796 logs.go:138] Found kubelet problem: Apr 15 19:13:25 old-k8s-version-075400 kubelet[1653]: E0415 19:13:25.310091    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.828542    6796 logs.go:138] Found kubelet problem: Apr 15 19:13:34 old-k8s-version-075400 kubelet[1653]: E0415 19:13:34.311220    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.828542    6796 logs.go:138] Found kubelet problem: Apr 15 19:13:39 old-k8s-version-075400 kubelet[1653]: E0415 19:13:39.309519    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.832559    6796 logs.go:138] Found kubelet problem: Apr 15 19:13:47 old-k8s-version-075400 kubelet[1653]: E0415 19:13:47.774939    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0415 19:17:11.833561    6796 logs.go:138] Found kubelet problem: Apr 15 19:13:54 old-k8s-version-075400 kubelet[1653]: E0415 19:13:54.307787    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.833561    6796 logs.go:138] Found kubelet problem: Apr 15 19:14:00 old-k8s-version-075400 kubelet[1653]: E0415 19:14:00.324109    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.834553    6796 logs.go:138] Found kubelet problem: Apr 15 19:14:07 old-k8s-version-075400 kubelet[1653]: E0415 19:14:07.308728    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.834553    6796 logs.go:138] Found kubelet problem: Apr 15 19:14:11 old-k8s-version-075400 kubelet[1653]: E0415 19:14:11.308825    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.834553    6796 logs.go:138] Found kubelet problem: Apr 15 19:14:19 old-k8s-version-075400 kubelet[1653]: E0415 19:14:19.306081    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.835549    6796 logs.go:138] Found kubelet problem: Apr 15 19:14:24 old-k8s-version-075400 kubelet[1653]: E0415 19:14:24.306865    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.835549    6796 logs.go:138] Found kubelet problem: Apr 15 19:14:30 old-k8s-version-075400 kubelet[1653]: E0415 19:14:30.307010    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.836532    6796 logs.go:138] Found kubelet problem: Apr 15 19:14:38 old-k8s-version-075400 kubelet[1653]: E0415 19:14:38.304934    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.838543    6796 logs.go:138] Found kubelet problem: Apr 15 19:14:45 old-k8s-version-075400 kubelet[1653]: E0415 19:14:45.363819    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0415 19:17:11.838543    6796 logs.go:138] Found kubelet problem: Apr 15 19:14:53 old-k8s-version-075400 kubelet[1653]: E0415 19:14:53.301644    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.838543    6796 logs.go:138] Found kubelet problem: Apr 15 19:14:57 old-k8s-version-075400 kubelet[1653]: E0415 19:14:57.304954    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.839535    6796 logs.go:138] Found kubelet problem: Apr 15 19:15:07 old-k8s-version-075400 kubelet[1653]: E0415 19:15:07.315392    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.839535    6796 logs.go:138] Found kubelet problem: Apr 15 19:15:11 old-k8s-version-075400 kubelet[1653]: E0415 19:15:11.301274    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.839535    6796 logs.go:138] Found kubelet problem: Apr 15 19:15:22 old-k8s-version-075400 kubelet[1653]: E0415 19:15:22.304799    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.841537    6796 logs.go:138] Found kubelet problem: Apr 15 19:15:23 old-k8s-version-075400 kubelet[1653]: E0415 19:15:23.511199    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0415 19:17:11.842538    6796 logs.go:138] Found kubelet problem: Apr 15 19:15:36 old-k8s-version-075400 kubelet[1653]: E0415 19:15:36.301124    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.842538    6796 logs.go:138] Found kubelet problem: Apr 15 19:15:38 old-k8s-version-075400 kubelet[1653]: E0415 19:15:38.301726    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.842538    6796 logs.go:138] Found kubelet problem: Apr 15 19:15:47 old-k8s-version-075400 kubelet[1653]: E0415 19:15:47.298701    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.842538    6796 logs.go:138] Found kubelet problem: Apr 15 19:15:50 old-k8s-version-075400 kubelet[1653]: E0415 19:15:50.298915    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.842538    6796 logs.go:138] Found kubelet problem: Apr 15 19:16:02 old-k8s-version-075400 kubelet[1653]: E0415 19:16:02.298116    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.842538    6796 logs.go:138] Found kubelet problem: Apr 15 19:16:02 old-k8s-version-075400 kubelet[1653]: E0415 19:16:02.298347    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.843542    6796 logs.go:138] Found kubelet problem: Apr 15 19:16:15 old-k8s-version-075400 kubelet[1653]: E0415 19:16:15.317907    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.843542    6796 logs.go:138] Found kubelet problem: Apr 15 19:16:16 old-k8s-version-075400 kubelet[1653]: E0415 19:16:16.301254    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.843542    6796 logs.go:138] Found kubelet problem: Apr 15 19:16:27 old-k8s-version-075400 kubelet[1653]: E0415 19:16:27.297988    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.843542    6796 logs.go:138] Found kubelet problem: Apr 15 19:16:28 old-k8s-version-075400 kubelet[1653]: E0415 19:16:28.299384    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.844537    6796 logs.go:138] Found kubelet problem: Apr 15 19:16:42 old-k8s-version-075400 kubelet[1653]: E0415 19:16:42.296089    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.844537    6796 logs.go:138] Found kubelet problem: Apr 15 19:16:42 old-k8s-version-075400 kubelet[1653]: E0415 19:16:42.297293    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.844537    6796 logs.go:138] Found kubelet problem: Apr 15 19:16:53 old-k8s-version-075400 kubelet[1653]: E0415 19:16:53.293453    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.844537    6796 logs.go:138] Found kubelet problem: Apr 15 19:16:56 old-k8s-version-075400 kubelet[1653]: E0415 19:16:56.293967    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.844537    6796 logs.go:138] Found kubelet problem: Apr 15 19:17:04 old-k8s-version-075400 kubelet[1653]: E0415 19:17:04.293137    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.845547    6796 logs.go:138] Found kubelet problem: Apr 15 19:17:09 old-k8s-version-075400 kubelet[1653]: E0415 19:17:09.296445    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0415 19:17:11.845547    6796 logs.go:123] Gathering logs for etcd [348005d1b56d] ...
	I0415 19:17:11.845547    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348005d1b56d"
	I0415 19:17:11.914532    6796 logs.go:123] Gathering logs for coredns [11235e2fd801] ...
	I0415 19:17:11.914532    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11235e2fd801"
	I0415 19:17:11.959547    6796 logs.go:123] Gathering logs for coredns [3a281f046968] ...
	I0415 19:17:11.959547    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a281f046968"
	I0415 19:17:12.022551    6796 logs.go:123] Gathering logs for kube-proxy [665d0d639f5b] ...
	I0415 19:17:12.022551    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 665d0d639f5b"
	I0415 19:17:12.070550    6796 logs.go:123] Gathering logs for describe nodes ...
	I0415 19:17:12.070550    6796 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 19:17:12.302147    6796 logs.go:123] Gathering logs for kube-apiserver [625287f80046] ...
	I0415 19:17:12.302147    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 625287f80046"
	I0415 19:17:12.416138    6796 logs.go:123] Gathering logs for etcd [21a6d7992112] ...
	I0415 19:17:12.416138    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21a6d7992112"
	I0415 19:17:12.474702    6796 logs.go:123] Gathering logs for storage-provisioner [4cc2abc051be] ...
	I0415 19:17:12.474702    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cc2abc051be"
	I0415 19:17:12.532328    6796 logs.go:123] Gathering logs for Docker ...
	I0415 19:17:12.532328    6796 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 19:17:12.579365    6796 out.go:304] Setting ErrFile to fd 1908...
	I0415 19:17:12.579365    6796 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0415 19:17:12.579365    6796 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0415 19:17:12.579365    6796 out.go:239]   Apr 15 19:16:42 old-k8s-version-075400 kubelet[1653]: E0415 19:16:42.297293    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	  Apr 15 19:16:42 old-k8s-version-075400 kubelet[1653]: E0415 19:16:42.297293    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:17:12.579365    6796 out.go:239]   Apr 15 19:16:53 old-k8s-version-075400 kubelet[1653]: E0415 19:16:53.293453    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	  Apr 15 19:16:53 old-k8s-version-075400 kubelet[1653]: E0415 19:16:53.293453    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:17:12.579365    6796 out.go:239]   Apr 15 19:16:56 old-k8s-version-075400 kubelet[1653]: E0415 19:16:56.293967    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Apr 15 19:16:56 old-k8s-version-075400 kubelet[1653]: E0415 19:16:56.293967    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:17:12.579365    6796 out.go:239]   Apr 15 19:17:04 old-k8s-version-075400 kubelet[1653]: E0415 19:17:04.293137    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	  Apr 15 19:17:04 old-k8s-version-075400 kubelet[1653]: E0415 19:17:04.293137    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:17:12.579365    6796 out.go:239]   Apr 15 19:17:09 old-k8s-version-075400 kubelet[1653]: E0415 19:17:09.296445    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Apr 15 19:17:09 old-k8s-version-075400 kubelet[1653]: E0415 19:17:09.296445    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0415 19:17:12.582645    6796 out.go:304] Setting ErrFile to fd 1908...
	I0415 19:17:12.582645    6796 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 19:17:22.586242    6796 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:56607/healthz ...
	I0415 19:17:22.607138    6796 api_server.go:279] https://127.0.0.1:56607/healthz returned 200:
	ok
	I0415 19:17:22.611198    6796 out.go:177] 
	W0415 19:17:22.613051    6796 out.go:239] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0415 19:17:22.613051    6796 out.go:239] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	* Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0415 19:17:22.613051    6796 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	* Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0415 19:17:22.613051    6796 out.go:239] * 
	* 
	W0415 19:17:22.614067    6796 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 19:17:22.616694    6796 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-windows-amd64.exe start -p old-k8s-version-075400 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.20.0": exit status 102
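Exit status 102 here corresponds to the K8S_UNHEALTHY_CONTROL_PLANE error captured in the stderr above: the API server answers /healthz, but the control plane never reports the requested v1.20.0. A minimal recovery sketch based on the log's own suggestion, reusing the profile name and a subset of the flags from the failing command (illustrative only, not re-run as part of this report):

	out/minikube-windows-amd64.exe delete --all --purge
	out/minikube-windows-amd64.exe start -p old-k8s-version-075400 --memory=2200 --driver=docker --kubernetes-version=v1.20.0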
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-075400
helpers_test.go:235: (dbg) docker inspect old-k8s-version-075400:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "56c40786cc0e6d9f8ce38ee70beef1a0c1c232f25b825be8f67186dd81e83a0e",
	        "Created": "2024-04-15T19:06:56.388837273Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 347235,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-04-15T19:10:30.768501448Z",
	            "FinishedAt": "2024-04-15T19:10:25.933771912Z"
	        },
	        "Image": "sha256:06fc94f477def8d6ec1f9decaa8d9de4b332d5597cd1759a7075056e46e00dfc",
	        "ResolvConfPath": "/var/lib/docker/containers/56c40786cc0e6d9f8ce38ee70beef1a0c1c232f25b825be8f67186dd81e83a0e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/56c40786cc0e6d9f8ce38ee70beef1a0c1c232f25b825be8f67186dd81e83a0e/hostname",
	        "HostsPath": "/var/lib/docker/containers/56c40786cc0e6d9f8ce38ee70beef1a0c1c232f25b825be8f67186dd81e83a0e/hosts",
	        "LogPath": "/var/lib/docker/containers/56c40786cc0e6d9f8ce38ee70beef1a0c1c232f25b825be8f67186dd81e83a0e/56c40786cc0e6d9f8ce38ee70beef1a0c1c232f25b825be8f67186dd81e83a0e-json.log",
	        "Name": "/old-k8s-version-075400",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-075400:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-075400",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3a5d15b830653a414525a8a1c32eb47b280221a4a065532fd3f1f36acf15ec84-init/diff:/var/lib/docker/overlay2/7d5cfefbd46c2f94744068cb810a43a2057da1935809c9054bd8d457b0f559e7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3a5d15b830653a414525a8a1c32eb47b280221a4a065532fd3f1f36acf15ec84/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3a5d15b830653a414525a8a1c32eb47b280221a4a065532fd3f1f36acf15ec84/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3a5d15b830653a414525a8a1c32eb47b280221a4a065532fd3f1f36acf15ec84/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-075400",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-075400/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-075400",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-075400",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-075400",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ca659fbfa0e50b96a16c3d5ede92545d33f086b007f62141d47d75c6444fa148",
	            "SandboxKey": "/var/run/docker/netns/ca659fbfa0e5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56608"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56610"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56611"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56612"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56607"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-075400": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "NetworkID": "e3c063fa4b55e4dd82d1d701274e6bf57873ed8b8b47289fc9d33b21aba409ab",
	                    "EndpointID": "1c83676a226c0c8213bb7a20a297a392fb620a1ce8b9eafa3bf375abf6d87bcb",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-075400",
	                        "56c40786cc0e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
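The inspect output above shows the container still running, with container port 8443/tcp published on host port 56607, the same endpoint the start log polls at https://127.0.0.1:56607/healthz. To pull just those two fields instead of the full document, the same Go-template form that appears later in the minikube logs can be used (a sketch, assuming a local Docker CLI):

	docker container inspect -f '{{.State.Status}}' old-k8s-version-075400
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-075400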
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-075400 -n old-k8s-version-075400
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-075400 -n old-k8s-version-075400: (1.3021193s)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p old-k8s-version-075400 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p old-k8s-version-075400 logs -n 25: (2.8339165s)
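The post-mortem below keeps only the last 25 log lines (-n 25); the boxed suggestion in the stderr above asks for a full capture when filing an issue. Both forms, as a sketch against this profile:

	out/minikube-windows-amd64.exe -p old-k8s-version-075400 logs -n 25
	out/minikube-windows-amd64.exe -p old-k8s-version-075400 logs --file=logs.txt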
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|-------------------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |       User        |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|-------------------|----------------|---------------------|---------------------|
	| image   | no-preload-523900 image list                           | no-preload-523900            | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:14 UTC | 15 Apr 24 19:14 UTC |
	|         | --format=json                                          |                              |                   |                |                     |                     |
	| pause   | -p no-preload-523900                                   | no-preload-523900            | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:14 UTC | 15 Apr 24 19:14 UTC |
	|         | --alsologtostderr -v=1                                 |                              |                   |                |                     |                     |
	| image   | embed-certs-362000 image list                          | embed-certs-362000           | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:14 UTC | 15 Apr 24 19:14 UTC |
	|         | --format=json                                          |                              |                   |                |                     |                     |
	| pause   | -p embed-certs-362000                                  | embed-certs-362000           | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:14 UTC | 15 Apr 24 19:14 UTC |
	|         | --alsologtostderr -v=1                                 |                              |                   |                |                     |                     |
	| unpause | -p no-preload-523900                                   | no-preload-523900            | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:14 UTC | 15 Apr 24 19:14 UTC |
	|         | --alsologtostderr -v=1                                 |                              |                   |                |                     |                     |
	| unpause | -p embed-certs-362000                                  | embed-certs-362000           | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:14 UTC | 15 Apr 24 19:14 UTC |
	|         | --alsologtostderr -v=1                                 |                              |                   |                |                     |                     |
	| delete  | -p no-preload-523900                                   | no-preload-523900            | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:14 UTC | 15 Apr 24 19:14 UTC |
	| delete  | -p embed-certs-362000                                  | embed-certs-362000           | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:14 UTC | 15 Apr 24 19:14 UTC |
	| delete  | -p no-preload-523900                                   | no-preload-523900            | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:14 UTC | 15 Apr 24 19:14 UTC |
	| start   | -p newest-cni-003000 --memory=2200 --alsologtostderr   | newest-cni-003000            | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:14 UTC | 15 Apr 24 19:16 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |                   |                |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |                   |                |                     |                     |
	|         | --network-plugin=cni                                   |                              |                   |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |                   |                |                     |                     |
	|         | --driver=docker --kubernetes-version=v1.30.0-rc.2      |                              |                   |                |                     |                     |
	| delete  | -p embed-certs-362000                                  | embed-certs-362000           | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:14 UTC | 15 Apr 24 19:14 UTC |
	| addons  | enable metrics-server -p newest-cni-003000             | newest-cni-003000            | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:16 UTC | 15 Apr 24 19:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |                   |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |                   |                |                     |                     |
	| stop    | -p newest-cni-003000                                   | newest-cni-003000            | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:16 UTC | 15 Apr 24 19:16 UTC |
	|         | --alsologtostderr -v=3                                 |                              |                   |                |                     |                     |
	| addons  | enable dashboard -p newest-cni-003000                  | newest-cni-003000            | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:16 UTC | 15 Apr 24 19:16 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |                   |                |                     |                     |
	| start   | -p newest-cni-003000 --memory=2200 --alsologtostderr   | newest-cni-003000            | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:16 UTC | 15 Apr 24 19:17 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |                   |                |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |                   |                |                     |                     |
	|         | --network-plugin=cni                                   |                              |                   |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |                   |                |                     |                     |
	|         | --driver=docker --kubernetes-version=v1.30.0-rc.2      |                              |                   |                |                     |                     |
	| image   | default-k8s-diff-port-923600                           | default-k8s-diff-port-923600 | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:16 UTC | 15 Apr 24 19:16 UTC |
	|         | image list --format=json                               |                              |                   |                |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-923600 | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:16 UTC | 15 Apr 24 19:16 UTC |
	|         | default-k8s-diff-port-923600                           |                              |                   |                |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |                   |                |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-923600 | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:16 UTC | 15 Apr 24 19:16 UTC |
	|         | default-k8s-diff-port-923600                           |                              |                   |                |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |                   |                |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-923600 | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:16 UTC | 15 Apr 24 19:16 UTC |
	|         | default-k8s-diff-port-923600                           |                              |                   |                |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-923600 | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:16 UTC | 15 Apr 24 19:16 UTC |
	|         | default-k8s-diff-port-923600                           |                              |                   |                |                     |                     |
	| image   | newest-cni-003000 image list                           | newest-cni-003000            | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:17 UTC | 15 Apr 24 19:17 UTC |
	|         | --format=json                                          |                              |                   |                |                     |                     |
	| pause   | -p newest-cni-003000                                   | newest-cni-003000            | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:17 UTC | 15 Apr 24 19:17 UTC |
	|         | --alsologtostderr -v=1                                 |                              |                   |                |                     |                     |
	| unpause | -p newest-cni-003000                                   | newest-cni-003000            | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:17 UTC | 15 Apr 24 19:17 UTC |
	|         | --alsologtostderr -v=1                                 |                              |                   |                |                     |                     |
	| delete  | -p newest-cni-003000                                   | newest-cni-003000            | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:17 UTC | 15 Apr 24 19:17 UTC |
	| delete  | -p newest-cni-003000                                   | newest-cni-003000            | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 19:17 UTC | 15 Apr 24 19:17 UTC |
	|---------|--------------------------------------------------------|------------------------------|-------------------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/15 19:16:29
	Running on machine: minikube4
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0415 19:16:29.137150    8392 out.go:291] Setting OutFile to fd 1772 ...
	I0415 19:16:29.137150    8392 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 19:16:29.137150    8392 out.go:304] Setting ErrFile to fd 1740...
	I0415 19:16:29.138144    8392 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 19:16:29.159663    8392 out.go:298] Setting JSON to false
	I0415 19:16:29.162798    8392 start.go:129] hostinfo: {"hostname":"minikube4","uptime":25058,"bootTime":1713183530,"procs":205,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W0415 19:16:29.163751    8392 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 19:16:29.166753    8392 out.go:177] * [newest-cni-003000] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0415 19:16:29.169522    8392 notify.go:220] Checking for updates...
	I0415 19:16:29.171715    8392 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0415 19:16:29.173850    8392 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 19:16:29.175931    8392 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I0415 19:16:29.178601    8392 out.go:177]   - MINIKUBE_LOCATION=18634
	I0415 19:16:29.180667    8392 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 19:16:29.183380    8392 config.go:182] Loaded profile config "newest-cni-003000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0-rc.2
	I0415 19:16:29.184351    8392 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 19:16:29.468463    8392 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0415 19:16:29.478121    8392 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 19:16:29.796366    8392 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:88 OomKillDisable:true NGoroutines:92 SystemTime:2024-04-15 19:16:29.758248159 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 19:16:29.801391    8392 out.go:177] * Using the docker driver based on existing profile
	I0415 19:16:29.803390    8392 start.go:297] selected driver: docker
	I0415 19:16:29.803390    8392 start.go:901] validating driver "docker" against &{Name:newest-cni-003000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:newest-cni-003000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 19:16:29.803390    8392 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 19:16:29.938326    8392 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 19:16:30.307677    8392 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:88 OomKillDisable:true NGoroutines:92 SystemTime:2024-04-15 19:16:30.259949465 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 19:16:30.308679    8392 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0415 19:16:30.308679    8392 cni.go:84] Creating CNI manager for ""
	I0415 19:16:30.308679    8392 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 19:16:30.308679    8392 start.go:340] cluster config:
	{Name:newest-cni-003000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:newest-cni-003000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 19:16:30.311710    8392 out.go:177] * Starting "newest-cni-003000" primary control-plane node in "newest-cni-003000" cluster
	I0415 19:16:30.313683    8392 cache.go:121] Beginning downloading kic base image for docker with docker
	I0415 19:16:30.317679    8392 out.go:177] * Pulling base image v0.0.43-1713176859-18634 ...
	I0415 19:16:30.319679    8392 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime docker
	I0415 19:16:30.319679    8392 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b in local docker daemon
	I0415 19:16:30.319679    8392 preload.go:147] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0415 19:16:30.319679    8392 cache.go:56] Caching tarball of preloaded images
	I0415 19:16:30.320680    8392 preload.go:173] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-rc.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0415 19:16:30.320680    8392 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-rc.2 on docker
	I0415 19:16:30.320680    8392 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-003000\config.json ...
	I0415 19:16:30.496165    8392 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b in local docker daemon, skipping pull
	I0415 19:16:30.496165    8392 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b exists in daemon, skipping load
	I0415 19:16:30.496165    8392 cache.go:194] Successfully downloaded all kic artifacts
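
	The image.go lines above show minikube checking whether the kicbase image is already present in the local Docker daemon before deciding to pull it. A minimal, illustrative sketch of that kind of check (a hypothetical helper, not minikube's actual implementation) simply shells out to `docker image inspect`, which exits non-zero when the image is absent:

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    )

	    // imageInDaemon reports whether the given image reference is already present
	    // in the local Docker daemon, mirroring the "exists in daemon, skipping load"
	    // check in the log above. Hypothetical helper, not minikube's code.
	    func imageInDaemon(ref string) bool {
	    	// `docker image inspect` exits non-zero when the image is absent.
	    	return exec.Command("docker", "image", "inspect", ref).Run() == nil
	    }

	    func main() {
	    	ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634"
	    	if imageInDaemon(ref) {
	    		fmt.Println(ref, "exists in daemon, skipping pull")
	    	} else {
	    		fmt.Println(ref, "not found locally, a pull would be required")
	    	}
	    }
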
	I0415 19:16:30.496165    8392 start.go:360] acquireMachinesLock for newest-cni-003000: {Name:mk27c2f190c70ba2dbef6526aac25713fcfd3a83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 19:16:30.496165    8392 start.go:364] duration metric: took 0s to acquireMachinesLock for "newest-cni-003000"
	I0415 19:16:30.496847    8392 start.go:96] Skipping create...Using existing machine configuration
	I0415 19:16:30.496847    8392 fix.go:54] fixHost starting: 
	I0415 19:16:30.513577    8392 cli_runner.go:164] Run: docker container inspect newest-cni-003000 --format={{.State.Status}}
	I0415 19:16:30.697380    8392 fix.go:112] recreateIfNeeded on newest-cni-003000: state=Stopped err=<nil>
	W0415 19:16:30.697417    8392 fix.go:138] unexpected machine state, will restart: <nil>
	I0415 19:16:30.700959    8392 out.go:177] * Restarting existing docker container for "newest-cni-003000" ...
	I0415 19:16:28.450028    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:16:30.456040    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:16:32.460940    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:16:30.714309    8392 cli_runner.go:164] Run: docker start newest-cni-003000
	I0415 19:16:31.387737    8392 cli_runner.go:164] Run: docker container inspect newest-cni-003000 --format={{.State.Status}}
	I0415 19:16:31.581976    8392 kic.go:430] container "newest-cni-003000" state is running.
	I0415 19:16:31.594971    8392 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-003000
	I0415 19:16:31.768604    8392 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-003000\config.json ...
	I0415 19:16:31.771812    8392 machine.go:94] provisionDockerMachine start ...
	I0415 19:16:31.780610    8392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-003000
	I0415 19:16:31.980655    8392 main.go:141] libmachine: Using SSH client type: native
	I0415 19:16:31.981650    8392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeda1c0] 0xedcda0 <nil>  [] 0s} 127.0.0.1 56964 <nil> <nil>}
	I0415 19:16:31.981650    8392 main.go:141] libmachine: About to run SSH command:
	hostname
	I0415 19:16:31.985624    8392 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0415 19:16:34.959134    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:16:37.454839    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:16:35.169623    8392 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-003000
	
	I0415 19:16:35.169623    8392 ubuntu.go:169] provisioning hostname "newest-cni-003000"
	I0415 19:16:35.180311    8392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-003000
	I0415 19:16:35.361439    8392 main.go:141] libmachine: Using SSH client type: native
	I0415 19:16:35.362091    8392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeda1c0] 0xedcda0 <nil>  [] 0s} 127.0.0.1 56964 <nil> <nil>}
	I0415 19:16:35.362091    8392 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-003000 && echo "newest-cni-003000" | sudo tee /etc/hostname
	I0415 19:16:35.558516    8392 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-003000
	
	I0415 19:16:35.569479    8392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-003000
	I0415 19:16:35.752271    8392 main.go:141] libmachine: Using SSH client type: native
	I0415 19:16:35.752271    8392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeda1c0] 0xedcda0 <nil>  [] 0s} 127.0.0.1 56964 <nil> <nil>}
	I0415 19:16:35.752271    8392 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-003000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-003000/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-003000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0415 19:16:35.922339    8392 main.go:141] libmachine: SSH cmd err, output: <nil>: 
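
	The provisioning steps above are plain shell commands run over SSH: libmachine dials 127.0.0.1:56964, the host port Docker published for the container's 22/tcp, and runs `sudo hostname ...` plus the /etc/hosts patch as the `docker` user. A rough, self-contained sketch of the same pattern using golang.org/x/crypto/ssh (this is not libmachine's client; the port and key path are simply the ones reported in the log):

	    package main

	    import (
	    	"fmt"
	    	"log"
	    	"os"

	    	"golang.org/x/crypto/ssh"
	    )

	    func main() {
	    	// Key and port are the ones reported in the log above; adjust as needed.
	    	key, err := os.ReadFile(`C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-003000\id_rsa`)
	    	if err != nil {
	    		log.Fatal(err)
	    	}
	    	signer, err := ssh.ParsePrivateKey(key)
	    	if err != nil {
	    		log.Fatal(err)
	    	}
	    	cfg := &ssh.ClientConfig{
	    		User:            "docker",
	    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
	    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test rig only
	    	}
	    	// 22/tcp inside the container is published on 127.0.0.1:56964.
	    	client, err := ssh.Dial("tcp", "127.0.0.1:56964", cfg)
	    	if err != nil {
	    		log.Fatal(err)
	    	}
	    	defer client.Close()

	    	session, err := client.NewSession()
	    	if err != nil {
	    		log.Fatal(err)
	    	}
	    	defer session.Close()

	    	out, err := session.CombinedOutput(`sudo hostname newest-cni-003000 && echo "newest-cni-003000" | sudo tee /etc/hostname`)
	    	if err != nil {
	    		log.Fatal(err)
	    	}
	    	fmt.Printf("SSH cmd output: %s\n", out)
	    }
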
	I0415 19:16:35.922339    8392 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I0415 19:16:35.922339    8392 ubuntu.go:177] setting up certificates
	I0415 19:16:35.922339    8392 provision.go:84] configureAuth start
	I0415 19:16:35.935078    8392 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-003000
	I0415 19:16:36.103569    8392 provision.go:143] copyHostCerts
	I0415 19:16:36.104527    8392 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I0415 19:16:36.104604    8392 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I0415 19:16:36.105115    8392 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0415 19:16:36.106522    8392 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I0415 19:16:36.106558    8392 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I0415 19:16:36.106558    8392 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0415 19:16:36.107924    8392 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I0415 19:16:36.107924    8392 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I0415 19:16:36.107924    8392 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I0415 19:16:36.109433    8392 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.newest-cni-003000 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-003000]
	I0415 19:16:36.378267    8392 provision.go:177] copyRemoteCerts
	I0415 19:16:36.393095    8392 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0415 19:16:36.401857    8392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-003000
	I0415 19:16:36.573927    8392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56964 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-003000\id_rsa Username:docker}
	I0415 19:16:36.698937    8392 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0415 19:16:36.739904    8392 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0415 19:16:36.780800    8392 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0415 19:16:36.823023    8392 provision.go:87] duration metric: took 900.6428ms to configureAuth
	I0415 19:16:36.823023    8392 ubuntu.go:193] setting minikube options for container-runtime
	I0415 19:16:36.823023    8392 config.go:182] Loaded profile config "newest-cni-003000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0-rc.2
	I0415 19:16:36.833526    8392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-003000
	I0415 19:16:37.013143    8392 main.go:141] libmachine: Using SSH client type: native
	I0415 19:16:37.013753    8392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeda1c0] 0xedcda0 <nil>  [] 0s} 127.0.0.1 56964 <nil> <nil>}
	I0415 19:16:37.013753    8392 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0415 19:16:37.191591    8392 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0415 19:16:37.191591    8392 ubuntu.go:71] root file system type: overlay
	I0415 19:16:37.191591    8392 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0415 19:16:37.200488    8392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-003000
	I0415 19:16:37.356338    8392 main.go:141] libmachine: Using SSH client type: native
	I0415 19:16:37.356907    8392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeda1c0] 0xedcda0 <nil>  [] 0s} 127.0.0.1 56964 <nil> <nil>}
	I0415 19:16:37.357030    8392 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0415 19:16:37.551116    8392 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0415 19:16:37.563166    8392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-003000
	I0415 19:16:37.732258    8392 main.go:141] libmachine: Using SSH client type: native
	I0415 19:16:37.732258    8392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xeda1c0] 0xedcda0 <nil>  [] 0s} 127.0.0.1 56964 <nil> <nil>}
	I0415 19:16:37.732258    8392 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0415 19:16:37.928005    8392 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0415 19:16:37.928005    8392 machine.go:97] duration metric: took 6.1559078s to provisionDockerMachine
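
	The [Unit]/[Service] text above is rendered by minikube from a template, written to /lib/systemd/system/docker.service.new, and only moved into place (followed by a daemon-reload and docker restart) when `diff -u` shows it differs from the existing unit. As an illustration of the rendering step only, here is a cut-down template driven by Go's text/template; the template text and parameter names are simplified stand-ins, not minikube's real ones:

	    package main

	    import (
	    	"os"
	    	"text/template"
	    )

	    // A cut-down version of the unit shown in the log above; only the TLS
	    // paths and the extra dockerd arguments are parameterised here.
	    const unitTmpl = `[Unit]
	    Description=Docker Application Container Engine
	    After=network-online.target containerd.service
	    Requires=docker.socket

	    [Service]
	    Type=notify
	    ExecStart=
	    ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --tlsverify --tlscacert {{.CACert}} --tlscert {{.ServerCert}} --tlskey {{.ServerKey}} {{.ExtraArgs}}
	    ExecReload=/bin/kill -s HUP $MAINPID

	    [Install]
	    WantedBy=multi-user.target
	    `

	    func main() {
	    	params := struct {
	    		CACert, ServerCert, ServerKey, ExtraArgs string
	    	}{
	    		CACert:     "/etc/docker/ca.pem",
	    		ServerCert: "/etc/docker/server.pem",
	    		ServerKey:  "/etc/docker/server-key.pem",
	    		ExtraArgs:  "--label provider=docker --insecure-registry 10.96.0.0/12",
	    	}
	    	t := template.Must(template.New("docker.service").Parse(unitTmpl))
	    	// Print the rendered unit; the real flow writes it to
	    	// docker.service.new before diffing and moving it into place.
	    	if err := t.Execute(os.Stdout, params); err != nil {
	    		panic(err)
	    	}
	    }
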
	I0415 19:16:37.928005    8392 start.go:293] postStartSetup for "newest-cni-003000" (driver="docker")
	I0415 19:16:37.928096    8392 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0415 19:16:37.943574    8392 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0415 19:16:37.954343    8392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-003000
	I0415 19:16:38.126245    8392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56964 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-003000\id_rsa Username:docker}
	I0415 19:16:38.263180    8392 ssh_runner.go:195] Run: cat /etc/os-release
	I0415 19:16:38.272265    8392 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0415 19:16:38.272265    8392 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0415 19:16:38.272265    8392 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0415 19:16:38.272265    8392 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0415 19:16:38.272265    8392 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I0415 19:16:38.272265    8392 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I0415 19:16:38.273615    8392 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117482.pem -> 117482.pem in /etc/ssl/certs
	I0415 19:16:38.284472    8392 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0415 19:16:38.307178    8392 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117482.pem --> /etc/ssl/certs/117482.pem (1708 bytes)
	I0415 19:16:38.353034    8392 start.go:296] duration metric: took 424.9182ms for postStartSetup
	I0415 19:16:38.365024    8392 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 19:16:38.374029    8392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-003000
	I0415 19:16:38.546787    8392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56964 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-003000\id_rsa Username:docker}
	I0415 19:16:38.674808    8392 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 19:16:38.688093    8392 fix.go:56] duration metric: took 8.1908673s for fixHost
	I0415 19:16:38.688093    8392 start.go:83] releasing machines lock for "newest-cni-003000", held for 8.1909623s
	I0415 19:16:38.697929    8392 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-003000
	I0415 19:16:38.877225    8392 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0415 19:16:38.887773    8392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-003000
	I0415 19:16:38.891558    8392 ssh_runner.go:195] Run: cat /version.json
	I0415 19:16:38.900588    8392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-003000
	I0415 19:16:39.077860    8392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56964 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-003000\id_rsa Username:docker}
	I0415 19:16:39.092709    8392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56964 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-003000\id_rsa Username:docker}
	I0415 19:16:39.233216    8392 ssh_runner.go:195] Run: systemctl --version
	I0415 19:16:39.381193    8392 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0415 19:16:39.409400    8392 ssh_runner.go:195] Run: sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	W0415 19:16:39.429638    8392 start.go:438] unable to name loopback interface in configureRuntimes: unable to patch loopback cni config "/etc/cni/net.d/*loopback.conf*": sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;: Process exited with status 1
	stdout:
	
	stderr:
	find: '\\etc\\cni\\net.d': No such file or directory
	I0415 19:16:39.440635    8392 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0415 19:16:39.459694    8392 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
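
	The warning at 19:16:39.429638 above is a path-separator issue rather than a missing file: the loopback CNI patch command was assembled on the Windows host with backslashes (`\etc\cni\net.d`) and then executed inside the Linux guest, where `find` cannot resolve that path; the very next command, which uses `/etc/cni/net.d`, succeeds. A small illustrative Go program showing the pitfall and the usual remedy (build remote Linux paths with `path`, not `filepath`):

	    package main

	    import (
	    	"fmt"
	    	"path"          // always joins with forward slashes
	    	"path/filepath" // joins with the host OS separator ("\" on Windows)
	    )

	    func main() {
	    	// On a Windows host, filepath.Join produces `\etc\cni\net.d`, which is
	    	// what the failing `find` command in the log received.
	    	hostStyle := filepath.Join("/etc", "cni", "net.d")

	    	// For paths that are executed on the Linux guest, path.Join keeps the
	    	// forward slashes the remote shell expects: /etc/cni/net.d.
	    	remoteStyle := path.Join("/etc", "cni", "net.d")

	    	fmt.Println("filepath.Join:", hostStyle)
	    	fmt.Println("path.Join:    ", remoteStyle)
	    }
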
	I0415 19:16:39.459694    8392 start.go:494] detecting cgroup driver to use...
	I0415 19:16:39.459694    8392 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0415 19:16:39.459694    8392 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 19:16:39.503276    8392 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0415 19:16:39.538193    8392 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0415 19:16:39.558028    8392 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0415 19:16:39.571855    8392 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0415 19:16:39.604888    8392 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 19:16:39.642481    8392 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0415 19:16:39.676127    8392 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 19:16:39.707570    8392 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0415 19:16:39.736592    8392 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0415 19:16:39.773967    8392 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0415 19:16:39.801967    8392 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0415 19:16:39.834464    8392 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0415 19:16:39.864944    8392 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0415 19:16:39.898459    8392 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 19:16:40.051321    8392 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0415 19:16:40.243558    8392 start.go:494] detecting cgroup driver to use...
	I0415 19:16:40.243558    8392 detect.go:196] detected "cgroupfs" cgroup driver on host os
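
	detect.go reports the "cgroupfs" cgroup driver for the host OS, and the surrounding commands then align containerd (SystemdCgroup = false, above) and Docker (daemon.json, below) with that driver. As a loosely related, illustrative check only (this is not minikube's detection logic), one common way to tell the unified cgroup v2 hierarchy from legacy v1 inside the guest is to look for cgroup.controllers:

	    package main

	    import (
	    	"fmt"
	    	"os"
	    )

	    // cgroupMode reports whether the unified cgroup v2 hierarchy is mounted.
	    // Guests on the legacy v1 hierarchy are the ones typically paired with
	    // the "cgroupfs" driver configured for containerd and Docker above.
	    func cgroupMode() string {
	    	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
	    		return "cgroup v2 (unified)"
	    	}
	    	return "cgroup v1 (legacy)"
	    }

	    func main() {
	    	fmt.Println("detected:", cgroupMode())
	    }
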
	I0415 19:16:40.255568    8392 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0415 19:16:40.277571    8392 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0415 19:16:40.294572    8392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0415 19:16:40.323570    8392 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 19:16:40.364569    8392 ssh_runner.go:195] Run: which cri-dockerd
	I0415 19:16:40.393585    8392 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0415 19:16:40.412588    8392 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0415 19:16:40.462572    8392 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0415 19:16:40.729340    8392 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0415 19:16:40.920748    8392 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0415 19:16:40.920748    8392 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0415 19:16:40.972209    8392 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 19:16:41.152096    8392 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0415 19:16:41.888529    8392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0415 19:16:41.925540    8392 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0415 19:16:41.965533    8392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0415 19:16:41.998550    8392 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0415 19:16:42.141063    8392 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0415 19:16:42.296883    8392 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 19:16:42.458601    8392 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0415 19:16:42.502800    8392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0415 19:16:42.538874    8392 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 19:16:42.674934    8392 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0415 19:16:42.847328    8392 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0415 19:16:42.862046    8392 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0415 19:16:42.872633    8392 start.go:562] Will wait 60s for crictl version
	I0415 19:16:42.890641    8392 ssh_runner.go:195] Run: which crictl
	I0415 19:16:42.920636    8392 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0415 19:16:43.013989    8392 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.1
	RuntimeApiVersion:  v1
	I0415 19:16:43.026016    8392 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0415 19:16:43.088992    8392 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0415 19:16:39.456744    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:16:41.955538    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:16:43.135985    8392 out.go:204] * Preparing Kubernetes v1.30.0-rc.2 on Docker 26.0.1 ...
	I0415 19:16:43.144999    8392 cli_runner.go:164] Run: docker exec -t newest-cni-003000 dig +short host.docker.internal
	I0415 19:16:43.414709    8392 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0415 19:16:43.428712    8392 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0415 19:16:43.436707    8392 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0415 19:16:43.469228    8392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-003000
	I0415 19:16:43.654840    8392 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0415 19:16:43.656841    8392 kubeadm.go:877] updating cluster {Name:newest-cni-003000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:newest-cni-003000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0415 19:16:43.656841    8392 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime docker
	I0415 19:16:43.664806    8392 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0415 19:16:43.699813    8392 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0-rc.2
	registry.k8s.io/kube-proxy:v1.30.0-rc.2
	registry.k8s.io/kube-scheduler:v1.30.0-rc.2
	registry.k8s.io/kube-controller-manager:v1.30.0-rc.2
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0415 19:16:43.699813    8392 docker.go:615] Images already preloaded, skipping extraction
	I0415 19:16:43.713745    8392 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0415 19:16:43.756035    8392 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0-rc.2
	registry.k8s.io/kube-proxy:v1.30.0-rc.2
	registry.k8s.io/kube-scheduler:v1.30.0-rc.2
	registry.k8s.io/kube-controller-manager:v1.30.0-rc.2
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0415 19:16:43.756035    8392 cache_images.go:84] Images are preloaded, skipping loading
	I0415 19:16:43.756035    8392 kubeadm.go:928] updating node { 192.168.94.2 8443 v1.30.0-rc.2 docker true true} ...
	I0415 19:16:43.756035    8392 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-003000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-rc.2 ClusterName:newest-cni-003000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0415 19:16:43.764990    8392 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0415 19:16:43.864800    8392 cni.go:84] Creating CNI manager for ""
	I0415 19:16:43.864876    8392 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 19:16:43.864876    8392 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0415 19:16:43.864876    8392 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.30.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-003000 NodeName:newest-cni-003000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0415 19:16:43.865162    8392 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-003000"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0415 19:16:43.878465    8392 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.2
	I0415 19:16:43.898113    8392 binaries.go:44] Found k8s binaries, skipping transfer
	I0415 19:16:43.910425    8392 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0415 19:16:43.935375    8392 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (358 bytes)
	I0415 19:16:43.967778    8392 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0415 19:16:43.995738    8392 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2288 bytes)
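
	At this point the generated kubelet unit, its drop-in, and the multi-document kubeadm config shown above have been copied into the guest, the kubeadm config as /var/tmp/minikube/kubeadm.yaml.new (2288 bytes). For readers who want to inspect such a file offline, a minimal sketch that walks the YAML documents and prints each apiVersion and kind, assuming gopkg.in/yaml.v3 and a local copy of the file:

	    package main

	    import (
	    	"errors"
	    	"fmt"
	    	"io"
	    	"log"
	    	"os"

	    	"gopkg.in/yaml.v3"
	    )

	    func main() {
	    	f, err := os.Open("kubeadm.yaml") // local copy of the generated config
	    	if err != nil {
	    		log.Fatal(err)
	    	}
	    	defer f.Close()

	    	dec := yaml.NewDecoder(f)
	    	for {
	    		var doc map[string]interface{}
	    		if err := dec.Decode(&doc); err != nil {
	    			if errors.Is(err, io.EOF) {
	    				break // no more documents after the last `---`
	    			}
	    			log.Fatal(err)
	    		}
	    		// Each document declares its apiVersion and kind, e.g.
	    		// kubeadm.k8s.io/v1beta3 InitConfiguration.
	    		fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
	    	}
	    }
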
	I0415 19:16:44.036712    8392 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0415 19:16:44.047708    8392 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0415 19:16:44.081729    8392 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 19:16:44.239515    8392 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0415 19:16:44.262507    8392 certs.go:68] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-003000 for IP: 192.168.94.2
	I0415 19:16:44.262507    8392 certs.go:194] generating shared ca certs ...
	I0415 19:16:44.262507    8392 certs.go:226] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 19:16:44.262507    8392 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I0415 19:16:44.262507    8392 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I0415 19:16:44.263528    8392 certs.go:256] generating profile certs ...
	I0415 19:16:44.263528    8392 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-003000\client.key
	I0415 19:16:44.263528    8392 certs.go:359] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-003000\apiserver.key.a42ebac6
	I0415 19:16:44.264532    8392 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-003000\proxy-client.key
	I0415 19:16:44.265511    8392 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11748.pem (1338 bytes)
	W0415 19:16:44.265511    8392 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11748_empty.pem, impossibly tiny 0 bytes
	I0415 19:16:44.265511    8392 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0415 19:16:44.265511    8392 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0415 19:16:44.265511    8392 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0415 19:16:44.266522    8392 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0415 19:16:44.266522    8392 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117482.pem (1708 bytes)
	I0415 19:16:44.267527    8392 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0415 19:16:44.313073    8392 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0415 19:16:44.365953    8392 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0415 19:16:44.412967    8392 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0415 19:16:44.466950    8392 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-003000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0415 19:16:44.514598    8392 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-003000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0415 19:16:44.694977    8392 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-003000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0415 19:16:44.806370    8392 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-003000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0415 19:16:44.884896    8392 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0415 19:16:44.933797    8392 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11748.pem --> /usr/share/ca-certificates/11748.pem (1338 bytes)
	I0415 19:16:45.004935    8392 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117482.pem --> /usr/share/ca-certificates/117482.pem (1708 bytes)
	I0415 19:16:45.055017    8392 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0415 19:16:45.100449    8392 ssh_runner.go:195] Run: openssl version
	I0415 19:16:45.128710    8392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0415 19:16:45.156691    8392 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0415 19:16:45.167975    8392 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I0415 19:16:45.181312    8392 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0415 19:16:45.204995    8392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0415 19:16:45.248473    8392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11748.pem && ln -fs /usr/share/ca-certificates/11748.pem /etc/ssl/certs/11748.pem"
	I0415 19:16:45.293833    8392 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11748.pem
	I0415 19:16:45.303825    8392 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 17:52 /usr/share/ca-certificates/11748.pem
	I0415 19:16:45.315829    8392 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11748.pem
	I0415 19:16:45.344835    8392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11748.pem /etc/ssl/certs/51391683.0"
	I0415 19:16:45.375838    8392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/117482.pem && ln -fs /usr/share/ca-certificates/117482.pem /etc/ssl/certs/117482.pem"
	I0415 19:16:45.412839    8392 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117482.pem
	I0415 19:16:45.424828    8392 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 17:52 /usr/share/ca-certificates/117482.pem
	I0415 19:16:45.437867    8392 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117482.pem
	I0415 19:16:45.468837    8392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/117482.pem /etc/ssl/certs/3ec20f2e.0"
	I0415 19:16:45.507240    8392 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0415 19:16:45.529252    8392 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0415 19:16:45.552281    8392 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0415 19:16:45.577265    8392 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0415 19:16:45.608273    8392 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0415 19:16:45.638286    8392 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0415 19:16:45.669252    8392 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
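
	The series of `openssl x509 -noout -checkend 86400` calls above confirms that each existing control-plane certificate is still valid for at least another 24 hours before it is reused. The equivalent check can be done with Go's standard library; the file name below is a placeholder:

	    package main

	    import (
	    	"crypto/x509"
	    	"encoding/pem"
	    	"fmt"
	    	"log"
	    	"os"
	    	"time"
	    )

	    // expiresWithin reports whether the PEM certificate at path expires within d,
	    // which is what `openssl x509 -checkend 86400` tests for a 24h window.
	    func expiresWithin(path string, d time.Duration) (bool, error) {
	    	data, err := os.ReadFile(path)
	    	if err != nil {
	    		return false, err
	    	}
	    	block, _ := pem.Decode(data)
	    	if block == nil {
	    		return false, fmt.Errorf("no PEM data in %s", path)
	    	}
	    	cert, err := x509.ParseCertificate(block.Bytes)
	    	if err != nil {
	    		return false, err
	    	}
	    	return time.Now().Add(d).After(cert.NotAfter), nil
	    }

	    func main() {
	    	soon, err := expiresWithin("apiserver-kubelet-client.crt", 24*time.Hour)
	    	if err != nil {
	    		log.Fatal(err)
	    	}
	    	fmt.Println("expires within 24h:", soon)
	    }
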
	I0415 19:16:45.696268    8392 kubeadm.go:391] StartCluster: {Name:newest-cni-003000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:newest-cni-003000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 19:16:45.711263    8392 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0415 19:16:46.011839    8392 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0415 19:16:46.098545    8392 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0415 19:16:46.098545    8392 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0415 19:16:46.098545    8392 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0415 19:16:46.112414    8392 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0415 19:16:46.194061    8392 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0415 19:16:46.215055    8392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-003000
	I0415 19:16:46.418073    8392 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-003000" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0415 19:16:46.419080    8392 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-003000" cluster setting kubeconfig missing "newest-cni-003000" context setting]
	I0415 19:16:46.421080    8392 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 19:16:46.453061    8392 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0415 19:16:46.484083    8392 kubeadm.go:624] The running cluster does not require reconfiguration: 127.0.0.1
	I0415 19:16:46.484083    8392 kubeadm.go:591] duration metric: took 385.52ms to restartPrimaryControlPlane
	I0415 19:16:46.484083    8392 kubeadm.go:393] duration metric: took 787.7781ms to StartCluster
	I0415 19:16:46.484083    8392 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 19:16:46.484083    8392 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0415 19:16:46.487105    8392 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 19:16:46.489078    8392 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 19:16:46.496064    8392 out.go:177] * Verifying Kubernetes components...
	I0415 19:16:46.489078    8392 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0415 19:16:46.496064    8392 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-003000"
	I0415 19:16:46.498061    8392 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-003000"
	I0415 19:16:46.489078    8392 config.go:182] Loaded profile config "newest-cni-003000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0-rc.2
	I0415 19:16:46.496064    8392 addons.go:69] Setting dashboard=true in profile "newest-cni-003000"
	I0415 19:16:46.496064    8392 addons.go:69] Setting metrics-server=true in profile "newest-cni-003000"
	I0415 19:16:46.496064    8392 addons.go:69] Setting default-storageclass=true in profile "newest-cni-003000"
	I0415 19:16:46.498061    8392 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-003000"
	W0415 19:16:46.498061    8392 addons.go:243] addon storage-provisioner should already be in state true
	I0415 19:16:46.498061    8392 addons.go:234] Setting addon dashboard=true in "newest-cni-003000"
	W0415 19:16:46.498061    8392 addons.go:243] addon dashboard should already be in state true
	I0415 19:16:46.498061    8392 host.go:66] Checking if "newest-cni-003000" exists ...
	I0415 19:16:46.498061    8392 addons.go:234] Setting addon metrics-server=true in "newest-cni-003000"
	W0415 19:16:46.498061    8392 addons.go:243] addon metrics-server should already be in state true
	I0415 19:16:46.498061    8392 host.go:66] Checking if "newest-cni-003000" exists ...
	I0415 19:16:46.498061    8392 host.go:66] Checking if "newest-cni-003000" exists ...
	I0415 19:16:46.524396    8392 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 19:16:46.534758    8392 cli_runner.go:164] Run: docker container inspect newest-cni-003000 --format={{.State.Status}}
	I0415 19:16:46.535754    8392 cli_runner.go:164] Run: docker container inspect newest-cni-003000 --format={{.State.Status}}
	I0415 19:16:46.539757    8392 cli_runner.go:164] Run: docker container inspect newest-cni-003000 --format={{.State.Status}}
	I0415 19:16:46.540760    8392 cli_runner.go:164] Run: docker container inspect newest-cni-003000 --format={{.State.Status}}
	I0415 19:16:46.767418    8392 addons.go:234] Setting addon default-storageclass=true in "newest-cni-003000"
	W0415 19:16:46.767418    8392 addons.go:243] addon default-storageclass should already be in state true
	I0415 19:16:46.767418    8392 host.go:66] Checking if "newest-cni-003000" exists ...
	I0415 19:16:46.784430    8392 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0415 19:16:46.787420    8392 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0415 19:16:46.787420    8392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0415 19:16:46.801437    8392 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0415 19:16:46.800432    8392 cli_runner.go:164] Run: docker container inspect newest-cni-003000 --format={{.State.Status}}
	I0415 19:16:46.803429    8392 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0415 19:16:46.803429    8392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0415 19:16:46.804448    8392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-003000
	I0415 19:16:46.816431    8392 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0415 19:16:46.820440    8392 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0415 19:16:44.449962    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:16:46.459052    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:16:46.824432    8392 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0415 19:16:46.824432    8392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0415 19:16:46.820440    8392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-003000
	I0415 19:16:46.841421    8392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-003000
	I0415 19:16:47.016826    8392 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0415 19:16:47.016826    8392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0415 19:16:47.030829    8392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-003000
	I0415 19:16:47.032805    8392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56964 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-003000\id_rsa Username:docker}
	I0415 19:16:47.049812    8392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56964 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-003000\id_rsa Username:docker}
	I0415 19:16:47.066811    8392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56964 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-003000\id_rsa Username:docker}
	I0415 19:16:47.120811    8392 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0415 19:16:47.239841    8392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56964 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-003000\id_rsa Username:docker}
	I0415 19:16:47.304843    8392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-003000
	I0415 19:16:47.495683    8392 api_server.go:52] waiting for apiserver process to appear ...
	I0415 19:16:47.509638    8392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0415 19:16:47.515635    8392 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0415 19:16:47.800635    8392 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0415 19:16:47.800635    8392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0415 19:16:47.823831    8392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0415 19:16:47.886026    8392 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0415 19:16:47.886026    8392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0415 19:16:48.185968    8392 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0415 19:16:48.185968    8392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0415 19:16:48.203968    8392 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0415 19:16:48.204973    8392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0415 19:16:48.400959    8392 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0415 19:16:48.401968    8392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0415 19:16:48.499987    8392 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0415 19:16:48.499987    8392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0415 19:16:48.711020    8392 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0415 19:16:48.711020    8392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0415 19:16:48.719024    8392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0415 19:16:48.886298    8392 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0415 19:16:48.886298    8392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0415 19:16:48.993960    8392 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0415 19:16:48.993960    8392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0415 19:16:49.099700    8392 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0415 19:16:49.099700    8392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0415 19:16:48.465965    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:16:50.952175    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:16:52.955439    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:16:49.198689    8392 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0415 19:16:49.198689    8392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0415 19:16:49.283686    8392 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0415 19:16:49.283686    8392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0415 19:16:49.417226    8392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0415 19:16:54.965930    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:16:57.463789    6796 pod_ready.go:102] pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace has status "Ready":"False"
	I0415 19:16:58.100583    8392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.5904528s)
	I0415 19:16:58.100583    8392 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (10.5844563s)
	I0415 19:16:58.100583    8392 api_server.go:72] duration metric: took 11.6109663s to wait for apiserver process to appear ...
	I0415 19:16:58.100583    8392 api_server.go:88] waiting for apiserver healthz status ...
	I0415 19:16:58.100583    8392 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:56968/healthz ...
	I0415 19:16:58.100583    8392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.276275s)
	I0415 19:16:58.100583    8392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.3811236s)
	I0415 19:16:58.100583    8392 addons.go:470] Verifying addon metrics-server=true in "newest-cni-003000"
	I0415 19:16:58.190436    8392 api_server.go:279] https://127.0.0.1:56968/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0415 19:16:58.190436    8392 api_server.go:103] status: https://127.0.0.1:56968/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
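For reference, the verbose healthz probe that api_server.go keeps retrying above can be reproduced by hand against the forwarded port. Below is a minimal sketch in Go, assuming the same 127.0.0.1:56968 endpoint shown in this log and that anonymous access to /healthz is still permitted; it is an illustration, not minikube's actual check.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    )

    func main() {
    	// The apiserver presents a self-signed certificate on the forwarded port,
    	// so skip verification for this local, illustrative probe only.
    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    	}}
    	resp, err := client.Get("https://127.0.0.1:56968/healthz?verbose") // port taken from the log above
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	// A 500 with "[-]poststarthook/rbac/bootstrap-roles failed" matches the retries above;
    	// a 200 with "ok" matches the final check at 19:17:00.
    	fmt.Println(resp.StatusCode)
    	fmt.Println(string(body))
    }

Note that rbac/bootstrap-roles is the only failing check in the dumps above, which is why the probe flips to 200 once the bootstrap RBAC roles finish reconciling.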
	I0415 19:16:58.614620    8392 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:56968/healthz ...
	I0415 19:16:58.687698    8392 api_server.go:279] https://127.0.0.1:56968/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0415 19:16:58.687698    8392 api_server.go:103] status: https://127.0.0.1:56968/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0415 19:16:58.798057    8392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.3803483s)
	I0415 19:16:58.802047    8392 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-003000 addons enable metrics-server
	
	I0415 19:16:58.807046    8392 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0415 19:16:58.810026    8392 addons.go:505] duration metric: took 12.3203773s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
	I0415 19:16:59.106479    8392 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:56968/healthz ...
	I0415 19:16:59.189474    8392 api_server.go:279] https://127.0.0.1:56968/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0415 19:16:59.189474    8392 api_server.go:103] status: https://127.0.0.1:56968/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0415 19:16:59.607619    8392 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:56968/healthz ...
	I0415 19:16:59.621666    8392 api_server.go:279] https://127.0.0.1:56968/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0415 19:16:59.621666    8392 api_server.go:103] status: https://127.0.0.1:56968/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0415 19:17:00.111787    8392 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:56968/healthz ...
	I0415 19:17:00.191442    8392 api_server.go:279] https://127.0.0.1:56968/healthz returned 200:
	ok
	I0415 19:17:00.211748    8392 api_server.go:141] control plane version: v1.30.0-rc.2
	I0415 19:17:00.211748    8392 api_server.go:131] duration metric: took 2.111067s to wait for apiserver health ...
	I0415 19:17:00.211748    8392 system_pods.go:43] waiting for kube-system pods to appear ...
	I0415 19:17:00.299288    8392 system_pods.go:59] 8 kube-system pods found
	I0415 19:17:00.299389    8392 system_pods.go:61] "coredns-7db6d8ff4d-79fm9" [1e03473f-417a-4881-a6ab-e9c507808933] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0415 19:17:00.299389    8392 system_pods.go:61] "etcd-newest-cni-003000" [54b8fd30-901b-41ad-a9f5-0a594e933d6d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0415 19:17:00.299389    8392 system_pods.go:61] "kube-apiserver-newest-cni-003000" [be6644c5-01b7-4e32-97e3-ad1ebe9bc220] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0415 19:17:00.299389    8392 system_pods.go:61] "kube-controller-manager-newest-cni-003000" [8fb1f9b2-4927-4153-a06b-b4e6edc5fbed] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0415 19:17:00.299389    8392 system_pods.go:61] "kube-proxy-qbmr6" [b9045679-1d01-46ce-b5c9-3f332e063375] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0415 19:17:00.299389    8392 system_pods.go:61] "kube-scheduler-newest-cni-003000" [b339019c-d0e3-4e87-8a81-1575f062cfeb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0415 19:17:00.299568    8392 system_pods.go:61] "metrics-server-569cc877fc-qgv62" [99f2547b-6f06-4832-ac1e-ef6a4cddb311] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0415 19:17:00.299641    8392 system_pods.go:61] "storage-provisioner" [1dcf4fa5-267a-47c6-a3b5-3517169cc1a5] Running
	I0415 19:17:00.299641    8392 system_pods.go:74] duration metric: took 87.8891ms to wait for pod list to return data ...
	I0415 19:17:00.299641    8392 default_sa.go:34] waiting for default service account to be created ...
	I0415 19:17:00.310305    8392 default_sa.go:45] found service account: "default"
	I0415 19:17:00.310305    8392 default_sa.go:55] duration metric: took 10.664ms for default service account to be created ...
	I0415 19:17:00.310305    8392 kubeadm.go:576] duration metric: took 13.8205864s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0415 19:17:00.310305    8392 node_conditions.go:102] verifying NodePressure condition ...
	I0415 19:17:00.383485    8392 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I0415 19:17:00.383485    8392 node_conditions.go:123] node cpu capacity is 16
	I0415 19:17:00.383485    8392 node_conditions.go:105] duration metric: took 73.1769ms to run NodePressure ...
	I0415 19:17:00.383485    8392 start.go:240] waiting for startup goroutines ...
	I0415 19:17:00.383485    8392 start.go:245] waiting for cluster config update ...
	I0415 19:17:00.383485    8392 start.go:254] writing updated cluster config ...
	I0415 19:17:00.404478    8392 ssh_runner.go:195] Run: rm -f paused
	I0415 19:17:00.560483    8392 start.go:600] kubectl: 1.29.3, cluster: 1.30.0-rc.2 (minor skew: 1)
	I0415 19:17:00.563068    8392 out.go:177] * Done! kubectl is now configured to use "newest-cni-003000" cluster and "default" namespace by default
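The "minor skew: 1" note at 19:17:00.560483 comes from comparing the kubectl client minor version with the cluster minor version. A rough sketch of that comparison follows, using simple string parsing rather than minikube's actual version handling; the two version strings are copied from the log line above.

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // minor extracts the minor version number from strings like "1.29.3" or "1.30.0-rc.2".
    func minor(v string) int {
    	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
    	m, _ := strconv.Atoi(parts[1])
    	return m
    }

    func main() {
    	kubectlVersion := "1.29.3"      // from the log line above
    	clusterVersion := "1.30.0-rc.2" // from the log line above
    	skew := minor(clusterVersion) - minor(kubectlVersion)
    	if skew < 0 {
    		skew = -skew
    	}
    	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectlVersion, clusterVersion, skew)
    }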
	I0415 19:16:58.452616    6796 pod_ready.go:81] duration metric: took 4m0.0222344s for pod "metrics-server-9975d5f86-8mprn" in "kube-system" namespace to be "Ready" ...
	E0415 19:16:58.452616    6796 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0415 19:16:58.452616    6796 pod_ready.go:38] duration metric: took 5m25.6208335s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0415 19:16:58.452616    6796 api_server.go:52] waiting for apiserver process to appear ...
	I0415 19:16:58.462643    6796 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 19:16:58.524624    6796 logs.go:276] 2 containers: [57985fd9aaa0 625287f80046]
	I0415 19:16:58.535613    6796 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 19:16:58.573620    6796 logs.go:276] 2 containers: [348005d1b56d 21a6d7992112]
	I0415 19:16:58.592634    6796 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 19:16:58.649625    6796 logs.go:276] 2 containers: [11235e2fd801 3a281f046968]
	I0415 19:16:58.659623    6796 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 19:16:58.706936    6796 logs.go:276] 2 containers: [8edd897fcde4 29ce0c645316]
	I0415 19:16:58.721950    6796 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 19:16:58.767031    6796 logs.go:276] 2 containers: [99e1c3d6c49a 665d0d639f5b]
	I0415 19:16:58.776025    6796 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 19:16:58.833396    6796 logs.go:276] 2 containers: [2a2a08cf9f78 83c733fc4b92]
	I0415 19:16:58.842398    6796 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 19:16:58.893288    6796 logs.go:276] 0 containers: []
	W0415 19:16:58.893288    6796 logs.go:278] No container was found matching "kindnet"
	I0415 19:16:58.911260    6796 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 19:16:58.958871    6796 logs.go:276] 2 containers: [4cc2abc051be 571b76208459]
	I0415 19:16:58.968875    6796 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0415 19:16:59.018472    6796 logs.go:276] 1 containers: [3733b3515cc0]
	I0415 19:16:59.019501    6796 logs.go:123] Gathering logs for kube-apiserver [625287f80046] ...
	I0415 19:16:59.019501    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 625287f80046"
	I0415 19:16:59.151478    6796 logs.go:123] Gathering logs for coredns [3a281f046968] ...
	I0415 19:16:59.151478    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a281f046968"
	I0415 19:16:59.213229    6796 logs.go:123] Gathering logs for container status ...
	I0415 19:16:59.213229    6796 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 19:16:59.293059    6796 logs.go:123] Gathering logs for kubelet ...
	I0415 19:16:59.293059    6796 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0415 19:16:59.391148    6796 logs.go:138] Found kubelet problem: Apr 15 19:11:38 old-k8s-version-075400 kubelet[1653]: E0415 19:11:38.208713    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0415 19:16:59.395137    6796 logs.go:138] Found kubelet problem: Apr 15 19:11:38 old-k8s-version-075400 kubelet[1653]: E0415 19:11:38.923145    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.396136    6796 logs.go:138] Found kubelet problem: Apr 15 19:11:40 old-k8s-version-075400 kubelet[1653]: E0415 19:11:40.139646    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.400137    6796 logs.go:138] Found kubelet problem: Apr 15 19:11:55 old-k8s-version-075400 kubelet[1653]: E0415 19:11:55.520427    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0415 19:16:59.404126    6796 logs.go:138] Found kubelet problem: Apr 15 19:12:00 old-k8s-version-075400 kubelet[1653]: E0415 19:12:00.414418    1653 pod_workers.go:191] Error syncing pod bc788811-26ce-487a-ba75-ce0fe2ecbb60 ("storage-provisioner_kube-system(bc788811-26ce-487a-ba75-ce0fe2ecbb60)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(bc788811-26ce-487a-ba75-ce0fe2ecbb60)"
	W0415 19:16:59.404126    6796 logs.go:138] Found kubelet problem: Apr 15 19:12:08 old-k8s-version-075400 kubelet[1653]: E0415 19:12:08.317183    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.407131    6796 logs.go:138] Found kubelet problem: Apr 15 19:12:19 old-k8s-version-075400 kubelet[1653]: E0415 19:12:19.120308    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0415 19:16:59.408196    6796 logs.go:138] Found kubelet problem: Apr 15 19:12:19 old-k8s-version-075400 kubelet[1653]: E0415 19:12:19.884729    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.410132    6796 logs.go:138] Found kubelet problem: Apr 15 19:12:21 old-k8s-version-075400 kubelet[1653]: E0415 19:12:21.368820    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0415 19:16:59.413121    6796 logs.go:138] Found kubelet problem: Apr 15 19:12:31 old-k8s-version-075400 kubelet[1653]: E0415 19:12:31.804593    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0415 19:16:59.413121    6796 logs.go:138] Found kubelet problem: Apr 15 19:12:33 old-k8s-version-075400 kubelet[1653]: E0415 19:12:33.314894    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.413121    6796 logs.go:138] Found kubelet problem: Apr 15 19:12:45 old-k8s-version-075400 kubelet[1653]: E0415 19:12:45.315528    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.413121    6796 logs.go:138] Found kubelet problem: Apr 15 19:12:46 old-k8s-version-075400 kubelet[1653]: E0415 19:12:46.316103    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.415773    6796 logs.go:138] Found kubelet problem: Apr 15 19:12:58 old-k8s-version-075400 kubelet[1653]: E0415 19:12:58.858560    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0415 19:16:59.416364    6796 logs.go:138] Found kubelet problem: Apr 15 19:13:00 old-k8s-version-075400 kubelet[1653]: E0415 19:13:00.312864    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.417062    6796 logs.go:138] Found kubelet problem: Apr 15 19:13:09 old-k8s-version-075400 kubelet[1653]: E0415 19:13:09.312646    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.418835    6796 logs.go:138] Found kubelet problem: Apr 15 19:13:14 old-k8s-version-075400 kubelet[1653]: E0415 19:13:14.423669    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0415 19:16:59.418835    6796 logs.go:138] Found kubelet problem: Apr 15 19:13:21 old-k8s-version-075400 kubelet[1653]: E0415 19:13:21.309832    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.418835    6796 logs.go:138] Found kubelet problem: Apr 15 19:13:25 old-k8s-version-075400 kubelet[1653]: E0415 19:13:25.310091    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.419840    6796 logs.go:138] Found kubelet problem: Apr 15 19:13:34 old-k8s-version-075400 kubelet[1653]: E0415 19:13:34.311220    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.419840    6796 logs.go:138] Found kubelet problem: Apr 15 19:13:39 old-k8s-version-075400 kubelet[1653]: E0415 19:13:39.309519    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.421848    6796 logs.go:138] Found kubelet problem: Apr 15 19:13:47 old-k8s-version-075400 kubelet[1653]: E0415 19:13:47.774939    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0415 19:16:59.421848    6796 logs.go:138] Found kubelet problem: Apr 15 19:13:54 old-k8s-version-075400 kubelet[1653]: E0415 19:13:54.307787    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.421848    6796 logs.go:138] Found kubelet problem: Apr 15 19:14:00 old-k8s-version-075400 kubelet[1653]: E0415 19:14:00.324109    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.422839    6796 logs.go:138] Found kubelet problem: Apr 15 19:14:07 old-k8s-version-075400 kubelet[1653]: E0415 19:14:07.308728    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.422839    6796 logs.go:138] Found kubelet problem: Apr 15 19:14:11 old-k8s-version-075400 kubelet[1653]: E0415 19:14:11.308825    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.422839    6796 logs.go:138] Found kubelet problem: Apr 15 19:14:19 old-k8s-version-075400 kubelet[1653]: E0415 19:14:19.306081    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.422839    6796 logs.go:138] Found kubelet problem: Apr 15 19:14:24 old-k8s-version-075400 kubelet[1653]: E0415 19:14:24.306865    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.422839    6796 logs.go:138] Found kubelet problem: Apr 15 19:14:30 old-k8s-version-075400 kubelet[1653]: E0415 19:14:30.307010    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.423833    6796 logs.go:138] Found kubelet problem: Apr 15 19:14:38 old-k8s-version-075400 kubelet[1653]: E0415 19:14:38.304934    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.425830    6796 logs.go:138] Found kubelet problem: Apr 15 19:14:45 old-k8s-version-075400 kubelet[1653]: E0415 19:14:45.363819    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0415 19:16:59.425830    6796 logs.go:138] Found kubelet problem: Apr 15 19:14:53 old-k8s-version-075400 kubelet[1653]: E0415 19:14:53.301644    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.425830    6796 logs.go:138] Found kubelet problem: Apr 15 19:14:57 old-k8s-version-075400 kubelet[1653]: E0415 19:14:57.304954    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.425830    6796 logs.go:138] Found kubelet problem: Apr 15 19:15:07 old-k8s-version-075400 kubelet[1653]: E0415 19:15:07.315392    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.425830    6796 logs.go:138] Found kubelet problem: Apr 15 19:15:11 old-k8s-version-075400 kubelet[1653]: E0415 19:15:11.301274    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.426829    6796 logs.go:138] Found kubelet problem: Apr 15 19:15:22 old-k8s-version-075400 kubelet[1653]: E0415 19:15:22.304799    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.428838    6796 logs.go:138] Found kubelet problem: Apr 15 19:15:23 old-k8s-version-075400 kubelet[1653]: E0415 19:15:23.511199    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0415 19:16:59.428838    6796 logs.go:138] Found kubelet problem: Apr 15 19:15:36 old-k8s-version-075400 kubelet[1653]: E0415 19:15:36.301124    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.428838    6796 logs.go:138] Found kubelet problem: Apr 15 19:15:38 old-k8s-version-075400 kubelet[1653]: E0415 19:15:38.301726    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.429833    6796 logs.go:138] Found kubelet problem: Apr 15 19:15:47 old-k8s-version-075400 kubelet[1653]: E0415 19:15:47.298701    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.429833    6796 logs.go:138] Found kubelet problem: Apr 15 19:15:50 old-k8s-version-075400 kubelet[1653]: E0415 19:15:50.298915    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.429833    6796 logs.go:138] Found kubelet problem: Apr 15 19:16:02 old-k8s-version-075400 kubelet[1653]: E0415 19:16:02.298116    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.429833    6796 logs.go:138] Found kubelet problem: Apr 15 19:16:02 old-k8s-version-075400 kubelet[1653]: E0415 19:16:02.298347    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.429833    6796 logs.go:138] Found kubelet problem: Apr 15 19:16:15 old-k8s-version-075400 kubelet[1653]: E0415 19:16:15.317907    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.430829    6796 logs.go:138] Found kubelet problem: Apr 15 19:16:16 old-k8s-version-075400 kubelet[1653]: E0415 19:16:16.301254    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.430829    6796 logs.go:138] Found kubelet problem: Apr 15 19:16:27 old-k8s-version-075400 kubelet[1653]: E0415 19:16:27.297988    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.430829    6796 logs.go:138] Found kubelet problem: Apr 15 19:16:28 old-k8s-version-075400 kubelet[1653]: E0415 19:16:28.299384    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.430829    6796 logs.go:138] Found kubelet problem: Apr 15 19:16:42 old-k8s-version-075400 kubelet[1653]: E0415 19:16:42.296089    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.431829    6796 logs.go:138] Found kubelet problem: Apr 15 19:16:42 old-k8s-version-075400 kubelet[1653]: E0415 19:16:42.297293    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.431829    6796 logs.go:138] Found kubelet problem: Apr 15 19:16:53 old-k8s-version-075400 kubelet[1653]: E0415 19:16:53.293453    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:16:59.431829    6796 logs.go:138] Found kubelet problem: Apr 15 19:16:56 old-k8s-version-075400 kubelet[1653]: E0415 19:16:56.293967    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0415 19:16:59.431829    6796 logs.go:123] Gathering logs for kube-apiserver [57985fd9aaa0] ...
	I0415 19:16:59.431829    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57985fd9aaa0"
	I0415 19:16:59.514484    6796 logs.go:123] Gathering logs for storage-provisioner [571b76208459] ...
	I0415 19:16:59.514484    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571b76208459"
	I0415 19:16:59.558800    6796 logs.go:123] Gathering logs for kubernetes-dashboard [3733b3515cc0] ...
	I0415 19:16:59.558895    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3733b3515cc0"
	I0415 19:16:59.604854    6796 logs.go:123] Gathering logs for kube-proxy [99e1c3d6c49a] ...
	I0415 19:16:59.604854    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99e1c3d6c49a"
	I0415 19:16:59.662985    6796 logs.go:123] Gathering logs for kube-controller-manager [83c733fc4b92] ...
	I0415 19:16:59.663104    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83c733fc4b92"
	I0415 19:16:59.731763    6796 logs.go:123] Gathering logs for etcd [348005d1b56d] ...
	I0415 19:16:59.731763    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348005d1b56d"
	I0415 19:16:59.792382    6796 logs.go:123] Gathering logs for etcd [21a6d7992112] ...
	I0415 19:16:59.792382    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21a6d7992112"
	I0415 19:16:59.858510    6796 logs.go:123] Gathering logs for coredns [11235e2fd801] ...
	I0415 19:16:59.858510    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11235e2fd801"
	I0415 19:16:59.920526    6796 logs.go:123] Gathering logs for kube-controller-manager [2a2a08cf9f78] ...
	I0415 19:16:59.920526    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a2a08cf9f78"
	I0415 19:16:59.991499    6796 logs.go:123] Gathering logs for dmesg ...
	I0415 19:16:59.991499    6796 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 19:17:00.029972    6796 logs.go:123] Gathering logs for describe nodes ...
	I0415 19:17:00.029972    6796 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 19:17:00.305282    6796 logs.go:123] Gathering logs for kube-proxy [665d0d639f5b] ...
	I0415 19:17:00.305282    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 665d0d639f5b"
	I0415 19:17:00.372557    6796 logs.go:123] Gathering logs for storage-provisioner [4cc2abc051be] ...
	I0415 19:17:00.372557    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cc2abc051be"
	I0415 19:17:00.434488    6796 logs.go:123] Gathering logs for Docker ...
	I0415 19:17:00.434488    6796 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 19:17:00.480478    6796 logs.go:123] Gathering logs for kube-scheduler [8edd897fcde4] ...
	I0415 19:17:00.480478    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8edd897fcde4"
	I0415 19:17:00.533492    6796 logs.go:123] Gathering logs for kube-scheduler [29ce0c645316] ...
	I0415 19:17:00.534483    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29ce0c645316"
	I0415 19:17:00.590088    6796 out.go:304] Setting ErrFile to fd 1908...
	I0415 19:17:00.590088    6796 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0415 19:17:00.590088    6796 out.go:239] X Problems detected in kubelet:
	W0415 19:17:00.590088    6796 out.go:239]   Apr 15 19:16:28 old-k8s-version-075400 kubelet[1653]: E0415 19:16:28.299384    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:17:00.590088    6796 out.go:239]   Apr 15 19:16:42 old-k8s-version-075400 kubelet[1653]: E0415 19:16:42.296089    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:17:00.590088    6796 out.go:239]   Apr 15 19:16:42 old-k8s-version-075400 kubelet[1653]: E0415 19:16:42.297293    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:17:00.590088    6796 out.go:239]   Apr 15 19:16:53 old-k8s-version-075400 kubelet[1653]: E0415 19:16:53.293453    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:17:00.590088    6796 out.go:239]   Apr 15 19:16:56 old-k8s-version-075400 kubelet[1653]: E0415 19:16:56.293967    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0415 19:17:00.590088    6796 out.go:304] Setting ErrFile to fd 1908...
	I0415 19:17:00.591087    6796 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 19:17:10.630553    6796 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0415 19:17:10.662391    6796 api_server.go:72] duration metric: took 5m50.6258279s to wait for apiserver process to appear ...
	I0415 19:17:10.662391    6796 api_server.go:88] waiting for apiserver healthz status ...
	I0415 19:17:10.672384    6796 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0415 19:17:10.720882    6796 logs.go:276] 2 containers: [57985fd9aaa0 625287f80046]
	I0415 19:17:10.731881    6796 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0415 19:17:10.770047    6796 logs.go:276] 2 containers: [348005d1b56d 21a6d7992112]
	I0415 19:17:10.780391    6796 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0415 19:17:10.821766    6796 logs.go:276] 2 containers: [11235e2fd801 3a281f046968]
	I0415 19:17:10.831763    6796 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0415 19:17:10.871247    6796 logs.go:276] 2 containers: [8edd897fcde4 29ce0c645316]
	I0415 19:17:10.882722    6796 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0415 19:17:10.928528    6796 logs.go:276] 2 containers: [99e1c3d6c49a 665d0d639f5b]
	I0415 19:17:10.936514    6796 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0415 19:17:10.978254    6796 logs.go:276] 2 containers: [2a2a08cf9f78 83c733fc4b92]
	I0415 19:17:10.988270    6796 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0415 19:17:11.034097    6796 logs.go:276] 0 containers: []
	W0415 19:17:11.034097    6796 logs.go:278] No container was found matching "kindnet"
	I0415 19:17:11.043287    6796 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0415 19:17:11.091656    6796 logs.go:276] 1 containers: [3733b3515cc0]
	I0415 19:17:11.100652    6796 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0415 19:17:11.142834    6796 logs.go:276] 2 containers: [4cc2abc051be 571b76208459]
	I0415 19:17:11.142834    6796 logs.go:123] Gathering logs for dmesg ...
	I0415 19:17:11.142834    6796 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0415 19:17:11.168829    6796 logs.go:123] Gathering logs for kube-scheduler [8edd897fcde4] ...
	I0415 19:17:11.168829    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8edd897fcde4"
	I0415 19:17:11.219829    6796 logs.go:123] Gathering logs for kube-scheduler [29ce0c645316] ...
	I0415 19:17:11.219829    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29ce0c645316"
	I0415 19:17:11.287173    6796 logs.go:123] Gathering logs for kube-proxy [99e1c3d6c49a] ...
	I0415 19:17:11.287173    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99e1c3d6c49a"
	I0415 19:17:11.332866    6796 logs.go:123] Gathering logs for kube-controller-manager [83c733fc4b92] ...
	I0415 19:17:11.332866    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83c733fc4b92"
	I0415 19:17:11.401585    6796 logs.go:123] Gathering logs for kube-apiserver [57985fd9aaa0] ...
	I0415 19:17:11.401585    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57985fd9aaa0"
	I0415 19:17:11.470592    6796 logs.go:123] Gathering logs for kube-controller-manager [2a2a08cf9f78] ...
	I0415 19:17:11.470592    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a2a08cf9f78"
	I0415 19:17:11.534523    6796 logs.go:123] Gathering logs for kubernetes-dashboard [3733b3515cc0] ...
	I0415 19:17:11.534523    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3733b3515cc0"
	I0415 19:17:11.581530    6796 logs.go:123] Gathering logs for storage-provisioner [571b76208459] ...
	I0415 19:17:11.581530    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571b76208459"
	I0415 19:17:11.625531    6796 logs.go:123] Gathering logs for container status ...
	I0415 19:17:11.625531    6796 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0415 19:17:11.722523    6796 logs.go:123] Gathering logs for kubelet ...
	I0415 19:17:11.722523    6796 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0415 19:17:11.804549    6796 logs.go:138] Found kubelet problem: Apr 15 19:11:38 old-k8s-version-075400 kubelet[1653]: E0415 19:11:38.208713    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0415 19:17:11.806547    6796 logs.go:138] Found kubelet problem: Apr 15 19:11:38 old-k8s-version-075400 kubelet[1653]: E0415 19:11:38.923145    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.807548    6796 logs.go:138] Found kubelet problem: Apr 15 19:11:40 old-k8s-version-075400 kubelet[1653]: E0415 19:11:40.139646    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.809544    6796 logs.go:138] Found kubelet problem: Apr 15 19:11:55 old-k8s-version-075400 kubelet[1653]: E0415 19:11:55.520427    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0415 19:17:11.813545    6796 logs.go:138] Found kubelet problem: Apr 15 19:12:00 old-k8s-version-075400 kubelet[1653]: E0415 19:12:00.414418    1653 pod_workers.go:191] Error syncing pod bc788811-26ce-487a-ba75-ce0fe2ecbb60 ("storage-provisioner_kube-system(bc788811-26ce-487a-ba75-ce0fe2ecbb60)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(bc788811-26ce-487a-ba75-ce0fe2ecbb60)"
	W0415 19:17:11.813545    6796 logs.go:138] Found kubelet problem: Apr 15 19:12:08 old-k8s-version-075400 kubelet[1653]: E0415 19:12:08.317183    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.816945    6796 logs.go:138] Found kubelet problem: Apr 15 19:12:19 old-k8s-version-075400 kubelet[1653]: E0415 19:12:19.120308    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0415 19:17:11.817530    6796 logs.go:138] Found kubelet problem: Apr 15 19:12:19 old-k8s-version-075400 kubelet[1653]: E0415 19:12:19.884729    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.819593    6796 logs.go:138] Found kubelet problem: Apr 15 19:12:21 old-k8s-version-075400 kubelet[1653]: E0415 19:12:21.368820    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0415 19:17:11.822545    6796 logs.go:138] Found kubelet problem: Apr 15 19:12:31 old-k8s-version-075400 kubelet[1653]: E0415 19:12:31.804593    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0415 19:17:11.822545    6796 logs.go:138] Found kubelet problem: Apr 15 19:12:33 old-k8s-version-075400 kubelet[1653]: E0415 19:12:33.314894    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.822545    6796 logs.go:138] Found kubelet problem: Apr 15 19:12:45 old-k8s-version-075400 kubelet[1653]: E0415 19:12:45.315528    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.822545    6796 logs.go:138] Found kubelet problem: Apr 15 19:12:46 old-k8s-version-075400 kubelet[1653]: E0415 19:12:46.316103    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.824553    6796 logs.go:138] Found kubelet problem: Apr 15 19:12:58 old-k8s-version-075400 kubelet[1653]: E0415 19:12:58.858560    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0415 19:17:11.825543    6796 logs.go:138] Found kubelet problem: Apr 15 19:13:00 old-k8s-version-075400 kubelet[1653]: E0415 19:13:00.312864    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.825543    6796 logs.go:138] Found kubelet problem: Apr 15 19:13:09 old-k8s-version-075400 kubelet[1653]: E0415 19:13:09.312646    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.827554    6796 logs.go:138] Found kubelet problem: Apr 15 19:13:14 old-k8s-version-075400 kubelet[1653]: E0415 19:13:14.423669    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0415 19:17:11.827554    6796 logs.go:138] Found kubelet problem: Apr 15 19:13:21 old-k8s-version-075400 kubelet[1653]: E0415 19:13:21.309832    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.828542    6796 logs.go:138] Found kubelet problem: Apr 15 19:13:25 old-k8s-version-075400 kubelet[1653]: E0415 19:13:25.310091    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.828542    6796 logs.go:138] Found kubelet problem: Apr 15 19:13:34 old-k8s-version-075400 kubelet[1653]: E0415 19:13:34.311220    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.828542    6796 logs.go:138] Found kubelet problem: Apr 15 19:13:39 old-k8s-version-075400 kubelet[1653]: E0415 19:13:39.309519    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.832559    6796 logs.go:138] Found kubelet problem: Apr 15 19:13:47 old-k8s-version-075400 kubelet[1653]: E0415 19:13:47.774939    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0415 19:17:11.833561    6796 logs.go:138] Found kubelet problem: Apr 15 19:13:54 old-k8s-version-075400 kubelet[1653]: E0415 19:13:54.307787    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.833561    6796 logs.go:138] Found kubelet problem: Apr 15 19:14:00 old-k8s-version-075400 kubelet[1653]: E0415 19:14:00.324109    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.834553    6796 logs.go:138] Found kubelet problem: Apr 15 19:14:07 old-k8s-version-075400 kubelet[1653]: E0415 19:14:07.308728    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.834553    6796 logs.go:138] Found kubelet problem: Apr 15 19:14:11 old-k8s-version-075400 kubelet[1653]: E0415 19:14:11.308825    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.834553    6796 logs.go:138] Found kubelet problem: Apr 15 19:14:19 old-k8s-version-075400 kubelet[1653]: E0415 19:14:19.306081    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.835549    6796 logs.go:138] Found kubelet problem: Apr 15 19:14:24 old-k8s-version-075400 kubelet[1653]: E0415 19:14:24.306865    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.835549    6796 logs.go:138] Found kubelet problem: Apr 15 19:14:30 old-k8s-version-075400 kubelet[1653]: E0415 19:14:30.307010    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.836532    6796 logs.go:138] Found kubelet problem: Apr 15 19:14:38 old-k8s-version-075400 kubelet[1653]: E0415 19:14:38.304934    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.838543    6796 logs.go:138] Found kubelet problem: Apr 15 19:14:45 old-k8s-version-075400 kubelet[1653]: E0415 19:14:45.363819    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0415 19:17:11.838543    6796 logs.go:138] Found kubelet problem: Apr 15 19:14:53 old-k8s-version-075400 kubelet[1653]: E0415 19:14:53.301644    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.838543    6796 logs.go:138] Found kubelet problem: Apr 15 19:14:57 old-k8s-version-075400 kubelet[1653]: E0415 19:14:57.304954    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.839535    6796 logs.go:138] Found kubelet problem: Apr 15 19:15:07 old-k8s-version-075400 kubelet[1653]: E0415 19:15:07.315392    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.839535    6796 logs.go:138] Found kubelet problem: Apr 15 19:15:11 old-k8s-version-075400 kubelet[1653]: E0415 19:15:11.301274    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.839535    6796 logs.go:138] Found kubelet problem: Apr 15 19:15:22 old-k8s-version-075400 kubelet[1653]: E0415 19:15:22.304799    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.841537    6796 logs.go:138] Found kubelet problem: Apr 15 19:15:23 old-k8s-version-075400 kubelet[1653]: E0415 19:15:23.511199    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0415 19:17:11.842538    6796 logs.go:138] Found kubelet problem: Apr 15 19:15:36 old-k8s-version-075400 kubelet[1653]: E0415 19:15:36.301124    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.842538    6796 logs.go:138] Found kubelet problem: Apr 15 19:15:38 old-k8s-version-075400 kubelet[1653]: E0415 19:15:38.301726    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.842538    6796 logs.go:138] Found kubelet problem: Apr 15 19:15:47 old-k8s-version-075400 kubelet[1653]: E0415 19:15:47.298701    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.842538    6796 logs.go:138] Found kubelet problem: Apr 15 19:15:50 old-k8s-version-075400 kubelet[1653]: E0415 19:15:50.298915    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.842538    6796 logs.go:138] Found kubelet problem: Apr 15 19:16:02 old-k8s-version-075400 kubelet[1653]: E0415 19:16:02.298116    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.842538    6796 logs.go:138] Found kubelet problem: Apr 15 19:16:02 old-k8s-version-075400 kubelet[1653]: E0415 19:16:02.298347    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.843542    6796 logs.go:138] Found kubelet problem: Apr 15 19:16:15 old-k8s-version-075400 kubelet[1653]: E0415 19:16:15.317907    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.843542    6796 logs.go:138] Found kubelet problem: Apr 15 19:16:16 old-k8s-version-075400 kubelet[1653]: E0415 19:16:16.301254    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.843542    6796 logs.go:138] Found kubelet problem: Apr 15 19:16:27 old-k8s-version-075400 kubelet[1653]: E0415 19:16:27.297988    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.843542    6796 logs.go:138] Found kubelet problem: Apr 15 19:16:28 old-k8s-version-075400 kubelet[1653]: E0415 19:16:28.299384    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.844537    6796 logs.go:138] Found kubelet problem: Apr 15 19:16:42 old-k8s-version-075400 kubelet[1653]: E0415 19:16:42.296089    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.844537    6796 logs.go:138] Found kubelet problem: Apr 15 19:16:42 old-k8s-version-075400 kubelet[1653]: E0415 19:16:42.297293    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.844537    6796 logs.go:138] Found kubelet problem: Apr 15 19:16:53 old-k8s-version-075400 kubelet[1653]: E0415 19:16:53.293453    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.844537    6796 logs.go:138] Found kubelet problem: Apr 15 19:16:56 old-k8s-version-075400 kubelet[1653]: E0415 19:16:56.293967    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.844537    6796 logs.go:138] Found kubelet problem: Apr 15 19:17:04 old-k8s-version-075400 kubelet[1653]: E0415 19:17:04.293137    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:17:11.845547    6796 logs.go:138] Found kubelet problem: Apr 15 19:17:09 old-k8s-version-075400 kubelet[1653]: E0415 19:17:09.296445    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0415 19:17:11.845547    6796 logs.go:123] Gathering logs for etcd [348005d1b56d] ...
	I0415 19:17:11.845547    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348005d1b56d"
	I0415 19:17:11.914532    6796 logs.go:123] Gathering logs for coredns [11235e2fd801] ...
	I0415 19:17:11.914532    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11235e2fd801"
	I0415 19:17:11.959547    6796 logs.go:123] Gathering logs for coredns [3a281f046968] ...
	I0415 19:17:11.959547    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a281f046968"
	I0415 19:17:12.022551    6796 logs.go:123] Gathering logs for kube-proxy [665d0d639f5b] ...
	I0415 19:17:12.022551    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 665d0d639f5b"
	I0415 19:17:12.070550    6796 logs.go:123] Gathering logs for describe nodes ...
	I0415 19:17:12.070550    6796 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0415 19:17:12.302147    6796 logs.go:123] Gathering logs for kube-apiserver [625287f80046] ...
	I0415 19:17:12.302147    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 625287f80046"
	I0415 19:17:12.416138    6796 logs.go:123] Gathering logs for etcd [21a6d7992112] ...
	I0415 19:17:12.416138    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21a6d7992112"
	I0415 19:17:12.474702    6796 logs.go:123] Gathering logs for storage-provisioner [4cc2abc051be] ...
	I0415 19:17:12.474702    6796 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cc2abc051be"
	I0415 19:17:12.532328    6796 logs.go:123] Gathering logs for Docker ...
	I0415 19:17:12.532328    6796 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0415 19:17:12.579365    6796 out.go:304] Setting ErrFile to fd 1908...
	I0415 19:17:12.579365    6796 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0415 19:17:12.579365    6796 out.go:239] X Problems detected in kubelet:
	W0415 19:17:12.579365    6796 out.go:239]   Apr 15 19:16:42 old-k8s-version-075400 kubelet[1653]: E0415 19:16:42.297293    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:17:12.579365    6796 out.go:239]   Apr 15 19:16:53 old-k8s-version-075400 kubelet[1653]: E0415 19:16:53.293453    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:17:12.579365    6796 out.go:239]   Apr 15 19:16:56 old-k8s-version-075400 kubelet[1653]: E0415 19:16:56.293967    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0415 19:17:12.579365    6796 out.go:239]   Apr 15 19:17:04 old-k8s-version-075400 kubelet[1653]: E0415 19:17:04.293137    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0415 19:17:12.579365    6796 out.go:239]   Apr 15 19:17:09 old-k8s-version-075400 kubelet[1653]: E0415 19:17:09.296445    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0415 19:17:12.582645    6796 out.go:304] Setting ErrFile to fd 1908...
	I0415 19:17:12.582645    6796 out.go:338] TERM=,COLORTERM=, which probably does not support color
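	The kubelet problems summarized above fall into two distinct buckets that recur throughout this run: metrics-server cannot pull fake.domain/registry.k8s.io/echoserver:1.4 because fake.domain does not resolve (the ErrImagePull "no such host" entries), while dashboard-metrics-scraper cannot pull registry.k8s.io/echoserver:1.4 because Docker 26 rejects its v1 / manifest schema 1 image (the DEPRECATION NOTICE entries); both pods then sit in ImagePullBackOff. Purely as a hypothetical illustration of the first bucket (not part of the test, and the outcome depends on local DNS), a standalone Go lookup of that registry host would be:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// fake.domain is the registry prefix seen in the metrics-server entries above;
		// it is expected not to resolve, which is what produces the ErrImagePull
		// "no such host" messages in the kubelet log (illustration only).
		addrs, err := net.LookupHost("fake.domain")
		if err != nil {
			fmt.Println("lookup failed as expected:", err)
			return
		}
		fmt.Println("unexpectedly resolved to:", addrs)
	}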
	I0415 19:17:22.586242    6796 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:56607/healthz ...
	I0415 19:17:22.607138    6796 api_server.go:279] https://127.0.0.1:56607/healthz returned 200:
	ok
	I0415 19:17:22.611198    6796 out.go:177] 
	W0415 19:17:22.613051    6796 out.go:239] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0415 19:17:22.613051    6796 out.go:239] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0415 19:17:22.613051    6796 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0415 19:17:22.613051    6796 out.go:239] * 
	W0415 19:17:22.614067    6796 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 19:17:22.616694    6796 out.go:177] 
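	At this point the apiserver's /healthz endpoint at 127.0.0.1:56607 is answering 200, yet the start still exits with K8S_UNHEALTHY_CONTROL_PLANE because the control plane never reported v1.20.0 within the 6m0s wait. For orientation only, a minimal standalone sketch of the kind of healthz poll recorded in the api_server.go lines above could look like the following; this is not minikube's implementation, the port is simply the one from this run, and TLS verification is skipped on the assumption that the local tunnel presents minikube's self-signed certificate:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Poll https://127.0.0.1:56607/healthz a few times, mirroring the
		// "Checking apiserver healthz at ..." lines in the log above.
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Assumption: the endpoint serves a self-signed certificate.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		for attempt := 0; attempt < 3; attempt++ {
			resp, err := client.Get("https://127.0.0.1:56607/healthz")
			if err != nil {
				fmt.Println("healthz not reachable yet:", err)
				time.Sleep(10 * time.Second)
				continue
			}
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			return
		}
	}

	A 200 with body "ok", as seen at 19:17:22 above, only says the apiserver process is serving; it does not by itself satisfy the wait in the exit message, which also requires the control plane to report the requested v1.20.0.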
	
	
	==> Docker <==
	Apr 15 19:16:59 old-k8s-version-075400 dockerd[1303]: 2024/04/15 19:16:59 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 19:16:59 old-k8s-version-075400 dockerd[1303]: 2024/04/15 19:16:59 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 19:16:59 old-k8s-version-075400 dockerd[1303]: 2024/04/15 19:16:59 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 19:16:59 old-k8s-version-075400 dockerd[1303]: 2024/04/15 19:16:59 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 19:16:59 old-k8s-version-075400 dockerd[1303]: 2024/04/15 19:16:59 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 19:16:59 old-k8s-version-075400 dockerd[1303]: 2024/04/15 19:16:59 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 19:17:00 old-k8s-version-075400 dockerd[1303]: 2024/04/15 19:17:00 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 19:17:00 old-k8s-version-075400 dockerd[1303]: 2024/04/15 19:17:00 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 19:17:00 old-k8s-version-075400 dockerd[1303]: 2024/04/15 19:17:00 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 19:17:00 old-k8s-version-075400 dockerd[1303]: 2024/04/15 19:17:00 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 19:17:11 old-k8s-version-075400 dockerd[1303]: 2024/04/15 19:17:11 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 19:17:11 old-k8s-version-075400 dockerd[1303]: 2024/04/15 19:17:11 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 19:17:11 old-k8s-version-075400 dockerd[1303]: 2024/04/15 19:17:11 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 19:17:11 old-k8s-version-075400 dockerd[1303]: 2024/04/15 19:17:11 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 19:17:11 old-k8s-version-075400 dockerd[1303]: 2024/04/15 19:17:11 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 19:17:11 old-k8s-version-075400 dockerd[1303]: 2024/04/15 19:17:11 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 19:17:11 old-k8s-version-075400 dockerd[1303]: 2024/04/15 19:17:11 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 19:17:11 old-k8s-version-075400 dockerd[1303]: 2024/04/15 19:17:11 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 19:17:11 old-k8s-version-075400 dockerd[1303]: 2024/04/15 19:17:11 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 19:17:11 old-k8s-version-075400 dockerd[1303]: 2024/04/15 19:17:11 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 19:17:12 old-k8s-version-075400 dockerd[1303]: 2024/04/15 19:17:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 19:17:12 old-k8s-version-075400 dockerd[1303]: 2024/04/15 19:17:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 19:17:12 old-k8s-version-075400 dockerd[1303]: 2024/04/15 19:17:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 19:17:12 old-k8s-version-075400 dockerd[1303]: 2024/04/15 19:17:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 15 19:17:12 old-k8s-version-075400 dockerd[1303]: 2024/04/15 19:17:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3733b3515cc0e       kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93        5 minutes ago       Running             kubernetes-dashboard      0                   79cc7f41c3eb2       kubernetes-dashboard-cd95d586-pk692
	4cc2abc051be6       6e38f40d628db                                                                                         5 minutes ago       Running             storage-provisioner       2                   15f61fbe62d6d       storage-provisioner
	11235e2fd801d       bfe3a36ebd252                                                                                         5 minutes ago       Running             coredns                   1                   dd1fde6eea5ac       coredns-74ff55c5b-b2jzn
	98bfc4173c056       56cc512116c8f                                                                                         5 minutes ago       Running             busybox                   1                   ed0c34d2536c6       busybox
	99e1c3d6c49aa       10cc881966cfd                                                                                         5 minutes ago       Running             kube-proxy                1                   64dfaeacd26a5       kube-proxy-td2vz
	571b762084598       6e38f40d628db                                                                                         5 minutes ago       Exited              storage-provisioner       1                   15f61fbe62d6d       storage-provisioner
	348005d1b56d0       0369cf4303ffd                                                                                         6 minutes ago       Running             etcd                      1                   5914ea3b8cf5a       etcd-old-k8s-version-075400
	2a2a08cf9f78d       b9fa1895dcaa6                                                                                         6 minutes ago       Running             kube-controller-manager   1                   d3037f5fed957       kube-controller-manager-old-k8s-version-075400
	8edd897fcde4b       3138b6e3d4712                                                                                         6 minutes ago       Running             kube-scheduler            1                   b4fb32c203e9a       kube-scheduler-old-k8s-version-075400
	57985fd9aaa0e       ca9843d3b5454                                                                                         6 minutes ago       Running             kube-apiserver            1                   290799c33ef6d       kube-apiserver-old-k8s-version-075400
	96f0cd179df1b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   7 minutes ago       Exited              busybox                   0                   c2e0a5616649a       busybox
	3a281f046968c       bfe3a36ebd252                                                                                         8 minutes ago       Exited              coredns                   0                   b62ae5a2c6238       coredns-74ff55c5b-b2jzn
	665d0d639f5b5       10cc881966cfd                                                                                         8 minutes ago       Exited              kube-proxy                0                   9a336dee4601b       kube-proxy-td2vz
	625287f80046d       ca9843d3b5454                                                                                         9 minutes ago       Exited              kube-apiserver            0                   7083eb2001517       kube-apiserver-old-k8s-version-075400
	21a6d7992112a       0369cf4303ffd                                                                                         9 minutes ago       Exited              etcd                      0                   47e0bdcf0dcd6       etcd-old-k8s-version-075400
	83c733fc4b929       b9fa1895dcaa6                                                                                         9 minutes ago       Exited              kube-controller-manager   0                   a224b3acfad79       kube-controller-manager-old-k8s-version-075400
	29ce0c6453161       3138b6e3d4712                                                                                         9 minutes ago       Exited              kube-scheduler            0                   869c05dd79661       kube-scheduler-old-k8s-version-075400
	
	
	==> coredns [11235e2fd801] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 512bc0e06a520fa44f35dc15de10fdd6
	CoreDNS-1.7.0
	linux/amd64, go1.14.4, f59c03d
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] 127.0.0.1:58409 - 35470 "HINFO IN 6891221381046574577.82580001415956963. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.088162351s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	I0415 19:12:00.645863       1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-04-15 19:11:39.614247713 +0000 UTC m=+0.181874447) (total time: 21.034038251s):
	Trace[1427131847]: [21.034038251s] [21.034038251s] END
	I0415 19:12:00.645900       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-04-15 19:11:39.614247813 +0000 UTC m=+0.181874447) (total time: 21.034091559s):
	Trace[939984059]: [21.034091559s] [21.034091559s] END
	I0415 19:12:00.645909       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-04-15 19:11:39.614286018 +0000 UTC m=+0.181913052) (total time: 21.033831722s):
	Trace[2019727887]: [21.033831722s] [21.033831722s] END
	E0415 19:12:00.645917       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0415 19:12:00.645931       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0415 19:12:00.645917       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> coredns [3a281f046968] <==
	I0415 19:09:02.483616       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-04-15 19:08:41.425043781 +0000 UTC m=+0.090834341) (total time: 21.060854193s):
	Trace[2019727887]: [21.060854193s] [21.060854193s] END
	I0415 19:09:02.483761       1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-04-15 19:08:41.425210803 +0000 UTC m=+0.091001263) (total time: 21.06082979s):
	Trace[1427131847]: [21.06082979s] [21.06082979s] END
	E0415 19:09:02.483788       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0415 19:09:02.483835       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	I0415 19:09:02.483943       1 trace.go:116] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-04-15 19:08:41.425366023 +0000 UTC m=+0.091156583) (total time: 21.060928603s):
	Trace[911902081]: [21.060928603s] [21.060928603s] END
	E0415 19:09:02.483961       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0415 19:10:15.515760       1 reflector.go:382] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=199&timeout=6m5s&timeoutSeconds=365&watch=true": dial tcp 10.96.0.1:443: connect: connection refused
	E0415 19:10:15.515832       1 reflector.go:382] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to watch *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?allowWatchBookmarks=true&resourceVersion=589&timeout=8m23s&timeoutSeconds=503&watch=true": dial tcp 10.96.0.1:443: connect: connection refused
	E0415 19:10:15.515871       1 reflector.go:382] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=573&timeout=6m27s&timeoutSeconds=387&watch=true": dial tcp 10.96.0.1:443: connect: connection refused
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.7.0
	linux/amd64, go1.14.4, f59c03d
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = 512bc0e06a520fa44f35dc15de10fdd6
	[INFO] Reloading complete
	[INFO] 127.0.0.1:50857 - 46058 "HINFO IN 5166317581723099572.3717691957148941195. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.094605146s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-075400
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-075400
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4d13765a01eff8f8dc84b5d3ffa5d00e863a8f5c
	                    minikube.k8s.io/name=old-k8s-version-075400
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_15T19_08_20_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Apr 2024 19:08:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-075400
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Apr 2024 19:17:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Apr 2024 19:12:54 +0000   Mon, 15 Apr 2024 19:08:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Apr 2024 19:12:54 +0000   Mon, 15 Apr 2024 19:08:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Apr 2024 19:12:54 +0000   Mon, 15 Apr 2024 19:08:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Apr 2024 19:12:54 +0000   Mon, 15 Apr 2024 19:08:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-075400
	Capacity:
	  cpu:                16
	  ephemeral-storage:  1055762868Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32868764Ki
	  pods:               110
	Allocatable:
	  cpu:                16
	  ephemeral-storage:  1055762868Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32868764Ki
	  pods:               110
	System Info:
	  Machine ID:                 14e84ac187a943a1bd5cdf058ae3e257
	  System UUID:                14e84ac187a943a1bd5cdf058ae3e257
	  Boot ID:                    65f83766-a313-43df-830a-07de4d414c98
	  Kernel Version:             5.15.146.1-microsoft-standard-WSL2
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.1
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m26s
	  kube-system                 coredns-74ff55c5b-b2jzn                           100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     8m50s
	  kube-system                 etcd-old-k8s-version-075400                       100m (0%)     0 (0%)      100Mi (0%)       0 (0%)         9m3s
	  kube-system                 kube-apiserver-old-k8s-version-075400             250m (1%)     0 (0%)      0 (0%)           0 (0%)         9m3s
	  kube-system                 kube-controller-manager-old-k8s-version-075400    200m (1%)     0 (0%)      0 (0%)           0 (0%)         9m3s
	  kube-system                 kube-proxy-td2vz                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m50s
	  kube-system                 kube-scheduler-old-k8s-version-075400             100m (0%)     0 (0%)      0 (0%)           0 (0%)         9m3s
	  kube-system                 metrics-server-9975d5f86-8mprn                    100m (0%)     0 (0%)      200Mi (0%)       0 (0%)         7m12s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m47s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-jxcp7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m31s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-pk692               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (5%)   0 (0%)
	  memory             370Mi (1%)  170Mi (0%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  9m31s (x7 over 9m34s)  kubelet     Node old-k8s-version-075400 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m31s (x7 over 9m34s)  kubelet     Node old-k8s-version-075400 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m31s (x6 over 9m34s)  kubelet     Node old-k8s-version-075400 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m31s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 9m5s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m4s                   kubelet     Node old-k8s-version-075400 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m4s                   kubelet     Node old-k8s-version-075400 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m4s                   kubelet     Node old-k8s-version-075400 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m4s                   kubelet     Node old-k8s-version-075400 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9m3s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m54s                  kubelet     Node old-k8s-version-075400 status is now: NodeReady
	  Normal  Starting                 8m45s                  kube-proxy  Starting kube-proxy.
	  Normal  Starting                 6m8s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m8s (x8 over 6m8s)    kubelet     Node old-k8s-version-075400 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m8s (x8 over 6m8s)    kubelet     Node old-k8s-version-075400 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m8s (x7 over 6m8s)    kubelet     Node old-k8s-version-075400 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m8s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m47s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[Apr15 18:56] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [21a6d7992112] <==
	2024-04-15 19:08:37.920162 W | etcdserver: read-only range request "key:\"/registry/configmaps/kube-system/coredns\" " with result "range_response_count:1 size:612" took too long (473.880774ms) to execute
	2024-04-15 19:08:39.847841 W | etcdserver: read-only range request "key:\"/registry/deployments/kube-system/coredns\" " with result "range_response_count:1 size:3912" took too long (101.184991ms) to execute
	2024-04-15 19:08:40.032851 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-74ff55c5b-h486w\" " with result "range_response_count:1 size:4598" took too long (203.635546ms) to execute
	2024-04-15 19:08:40.034583 W | etcdserver: read-only range request "key:\"/registry/minions/old-k8s-version-075400\" " with result "range_response_count:1 size:5248" took too long (113.971957ms) to execute
	2024-04-15 19:08:42.826731 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-15 19:08:52.825336 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-15 19:09:02.824340 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-15 19:09:12.824554 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-15 19:09:13.835299 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (167.051305ms) to execute
	2024-04-15 19:09:13.916793 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-74ff55c5b-b2jzn\" " with result "range_response_count:1 size:4457" took too long (153.332013ms) to execute
	2024-04-15 19:09:13.948404 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:1 size:263" took too long (105.717599ms) to execute
	2024-04-15 19:09:22.822043 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-15 19:09:27.447629 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-74ff55c5b-h486w\" " with result "range_response_count:1 size:4598" took too long (952.060034ms) to execute
	2024-04-15 19:09:27.447895 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (784.967482ms) to execute
	2024-04-15 19:09:27.448148 W | etcdserver: read-only range request "key:\"/registry/cronjobs/\" range_end:\"/registry/cronjobs0\" count_only:true " with result "range_response_count:0 size:5" took too long (968.291619ms) to execute
	2024-04-15 19:09:32.821890 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-15 19:09:39.646937 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/etcd-old-k8s-version-075400\" " with result "range_response_count:1 size:5255" took too long (147.93803ms) to execute
	2024-04-15 19:09:42.836086 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-15 19:09:52.820465 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-15 19:10:02.822441 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-15 19:10:12.821697 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-15 19:10:15.411865 N | pkg/osutil: received terminated signal, shutting down...
	WARNING: 2024/04/15 19:10:15 grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	2024-04-15 19:10:15.426862 I | etcdserver: skipped leadership transfer for single voting member cluster
	WARNING: 2024/04/15 19:10:15 grpc: addrConn.createTransport failed to connect to {192.168.85.2:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 192.168.85.2:2379: connect: connection refused". Reconnecting...
	
	
	==> etcd [348005d1b56d] <==
	2024-04-15 19:15:22.556714 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/metrics-server-9975d5f86-8mprn\" " with result "range_response_count:1 size:4052" took too long (5.122896875s) to execute
	2024-04-15 19:15:22.557296 W | etcdserver: request "header:<ID:9722584276906254579 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:3660-second id:06ed8ee32bcba0f2>" with result "size:41" took too long (1.747950013s) to execute
	2024-04-15 19:15:22.557800 W | etcdserver: read-only range request "key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" count_only:true " with result "range_response_count:0 size:5" took too long (3.335879535s) to execute
	2024-04-15 19:15:22.558065 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:1121" took too long (3.282041708s) to execute
	2024-04-15 19:15:22.558114 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.036359851s) to execute
	2024-04-15 19:15:22.558188 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:257" took too long (3.532837279s) to execute
	2024-04-15 19:15:22.558349 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kubernetes-dashboard/dashboard-metrics-scraper\" " with result "range_response_count:1 size:903" took too long (2.752409359s) to execute
	2024-04-15 19:15:23.273349 W | etcdserver: read-only range request "key:\"/registry/services/specs/default/kubernetes\" " with result "range_response_count:1 size:644" took too long (707.744167ms) to execute
	2024-04-15 19:15:23.273906 W | etcdserver: read-only range request "key:\"/registry/minions/old-k8s-version-075400\" " with result "range_response_count:1 size:5502" took too long (702.886842ms) to execute
	2024-04-15 19:15:23.273990 W | etcdserver: read-only range request "key:\"/registry/rolebindings/\" range_end:\"/registry/rolebindings0\" count_only:true " with result "range_response_count:0 size:7" took too long (693.865181ms) to execute
	2024-04-15 19:15:23.274434 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (688.527495ms) to execute
	2024-04-15 19:15:26.220372 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-15 19:15:36.220533 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-15 19:15:46.220388 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-15 19:15:56.218095 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-15 19:16:06.218435 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-15 19:16:16.218354 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-15 19:16:26.216886 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-15 19:16:36.215762 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-15 19:16:46.215807 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-15 19:16:56.212863 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-15 19:17:02.643948 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (153.810632ms) to execute
	2024-04-15 19:17:06.214811 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-15 19:17:16.213428 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-15 19:17:26.211127 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 19:17:26 up  6:57,  0 users,  load average: 4.50, 5.81, 6.18
	Linux old-k8s-version-075400 5.15.146.1-microsoft-standard-WSL2 #1 SMP Thu Jan 11 04:09:03 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kube-apiserver [57985fd9aaa0] <==
	I0415 19:15:23.274418       1 trace.go:205] Trace[2040134106]: "GuaranteedUpdate etcd3" type:*core.Endpoints (15-Apr-2024 19:15:22.583) (total time: 690ms):
	Trace[2040134106]: ---"Transaction committed" 688ms (19:15:00.274)
	Trace[2040134106]: [690.799487ms] [690.799487ms] END
	I0415 19:15:23.274702       1 trace.go:205] Trace[1871594639]: "Update" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.85.2 (15-Apr-2024 19:15:22.583) (total time: 691ms):
	Trace[1871594639]: ---"Object stored in database" 691ms (19:15:00.274)
	Trace[1871594639]: [691.643896ms] [691.643896ms] END
	I0415 19:15:23.274997       1 trace.go:205] Trace[1603661697]: "Get" url:/api/v1/namespaces/default/services/kubernetes,user-agent:kube-apiserver/v1.20.0 (linux/amd64) kubernetes/af46c47,client:::1 (15-Apr-2024 19:15:22.563) (total time: 711ms):
	Trace[1603661697]: ---"About to write a response" 710ms (19:15:00.274)
	Trace[1603661697]: [711.050593ms] [711.050593ms] END
	I0415 19:15:23.276078       1 trace.go:205] Trace[1832953080]: "Get" url:/api/v1/nodes/old-k8s-version-075400,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,client:192.168.85.1 (15-Apr-2024 19:15:22.569) (total time: 706ms):
	Trace[1832953080]: ---"About to write a response" 705ms (19:15:00.275)
	Trace[1832953080]: [706.344387ms] [706.344387ms] END
	I0415 19:15:51.724571       1 client.go:360] parsed scheme: "passthrough"
	I0415 19:15:51.724721       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0415 19:15:51.724735       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0415 19:16:29.218977       1 client.go:360] parsed scheme: "passthrough"
	I0415 19:16:29.219141       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0415 19:16:29.219153       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0415 19:16:33.705170       1 handler_proxy.go:102] no RequestInfo found in the context
	E0415 19:16:33.705496       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0415 19:16:33.705511       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0415 19:17:13.570828       1 client.go:360] parsed scheme: "passthrough"
	I0415 19:17:13.571102       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0415 19:17:13.571125       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-apiserver [625287f80046] <==
	W0415 19:10:24.743229       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0415 19:10:24.810723       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0415 19:10:24.811812       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0415 19:10:24.822494       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0415 19:10:24.853439       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0415 19:10:24.876558       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0415 19:10:24.929303       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0415 19:10:24.930360       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0415 19:10:24.943610       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0415 19:10:25.113585       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0415 19:10:25.224036       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0415 19:10:25.273426       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0415 19:10:25.281924       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0415 19:10:25.285264       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0415 19:10:25.293021       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0415 19:10:25.311961       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0415 19:10:25.319262       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0415 19:10:25.319485       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0415 19:10:25.324893       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0415 19:10:25.336332       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0415 19:10:25.346049       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0415 19:10:25.356553       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0415 19:10:25.397097       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0415 19:10:25.430222       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0415 19:10:25.511177       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	
	==> kube-controller-manager [2a2a08cf9f78] <==
	W0415 19:13:00.813568       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0415 19:13:26.907257       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0415 19:13:32.462095       1 request.go:655] Throttling request took 1.046770372s, request: GET:https://192.168.85.2:8443/apis/node.k8s.io/v1beta1?timeout=32s
	W0415 19:13:33.314530       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0415 19:13:57.401373       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0415 19:14:04.963052       1 request.go:655] Throttling request took 1.047689675s, request: GET:https://192.168.85.2:8443/apis/apiextensions.k8s.io/v1?timeout=32s
	W0415 19:14:05.814734       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0415 19:14:27.903320       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0415 19:14:37.462787       1 request.go:655] Throttling request took 1.047186128s, request: GET:https://192.168.85.2:8443/apis/storage.k8s.io/v1?timeout=32s
	W0415 19:14:38.316775       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0415 19:14:58.420255       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0415 19:15:09.964340       1 request.go:655] Throttling request took 1.047436417s, request: GET:https://192.168.85.2:8443/apis/authentication.k8s.io/v1beta1?timeout=32s
	W0415 19:15:10.817265       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0415 19:15:28.923561       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0415 19:15:42.467894       1 request.go:655] Throttling request took 1.047020367s, request: GET:https://192.168.85.2:8443/apis/scheduling.k8s.io/v1beta1?timeout=32s
	W0415 19:15:43.320237       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0415 19:15:59.423683       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0415 19:16:14.968415       1 request.go:655] Throttling request took 1.046705636s, request: GET:https://192.168.85.2:8443/apis/admissionregistration.k8s.io/v1?timeout=32s
	W0415 19:16:15.820873       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0415 19:16:29.924924       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0415 19:16:47.466697       1 request.go:655] Throttling request took 1.046366432s, request: GET:https://192.168.85.2:8443/apis/events.k8s.io/v1?timeout=32s
	W0415 19:16:48.320059       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0415 19:17:00.427526       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0415 19:17:19.969454       1 request.go:655] Throttling request took 1.047481369s, request: GET:https://192.168.85.2:8443/apis/authorization.k8s.io/v1?timeout=32s
	W0415 19:17:20.821941       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	
	==> kube-controller-manager [83c733fc4b92] <==
	I0415 19:08:36.118428       1 shared_informer.go:247] Caches are synced for PVC protection 
	I0415 19:08:36.124585       1 range_allocator.go:373] Set node old-k8s-version-075400 PodCIDR to [10.244.0.0/24]
	I0415 19:08:36.126294       1 shared_informer.go:247] Caches are synced for expand 
	I0415 19:08:36.127241       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	I0415 19:08:36.127456       1 shared_informer.go:247] Caches are synced for resource quota 
	I0415 19:08:36.127463       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0415 19:08:36.130254       1 shared_informer.go:247] Caches are synced for resource quota 
	I0415 19:08:36.132614       1 shared_informer.go:247] Caches are synced for stateful set 
	I0415 19:08:36.328951       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-h486w"
	I0415 19:08:36.329002       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-td2vz"
	I0415 19:08:36.331199       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0415 19:08:36.444509       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-b2jzn"
	E0415 19:08:36.444816       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	I0415 19:08:36.629379       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0415 19:08:36.629412       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0415 19:08:36.631623       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0415 19:08:39.724028       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0415 19:08:39.822400       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-h486w"
	I0415 19:10:12.954080       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	I0415 19:10:12.983849       1 event.go:291] "Event occurred" object="kube-system/metrics-server-9975d5f86" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-9975d5f86-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0415 19:10:13.097962       1 replica_set.go:532] sync "kube-system/metrics-server-9975d5f86" failed with pods "metrics-server-9975d5f86-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	E0415 19:10:13.166911       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	E0415 19:10:13.215405       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	E0415 19:10:13.270076       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	I0415 19:10:14.127501       1 event.go:291] "Event occurred" object="kube-system/metrics-server-9975d5f86" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-9975d5f86-8mprn"
	
	
	==> kube-proxy [665d0d639f5b] <==
	W0415 19:08:41.081827       1 proxier.go:651] Failed to read file /lib/modules/5.15.146.1-microsoft-standard-WSL2/modules.builtin with error open /lib/modules/5.15.146.1-microsoft-standard-WSL2/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0415 19:08:41.118363       1 proxier.go:661] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0415 19:08:41.126372       1 proxier.go:661] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0415 19:08:41.133603       1 proxier.go:661] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0415 19:08:41.137483       1 proxier.go:661] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0415 19:08:41.141615       1 proxier.go:661] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	I0415 19:08:41.171144       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0415 19:08:41.171247       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0415 19:08:41.288741       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0415 19:08:41.289090       1 server_others.go:185] Using iptables Proxier.
	I0415 19:08:41.289854       1 server.go:650] Version: v1.20.0
	I0415 19:08:41.294287       1 config.go:224] Starting endpoint slice config controller
	I0415 19:08:41.294382       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0415 19:08:41.294412       1 config.go:315] Starting service config controller
	I0415 19:08:41.294416       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0415 19:08:41.418624       1 shared_informer.go:247] Caches are synced for service config 
	I0415 19:08:41.418624       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-proxy [99e1c3d6c49a] <==
	W0415 19:11:39.128829       1 proxier.go:651] Failed to read file /lib/modules/5.15.146.1-microsoft-standard-WSL2/modules.builtin with error open /lib/modules/5.15.146.1-microsoft-standard-WSL2/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0415 19:11:39.132675       1 proxier.go:661] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0415 19:11:39.136353       1 proxier.go:661] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0415 19:11:39.139330       1 proxier.go:661] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0415 19:11:39.143184       1 proxier.go:661] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0415 19:11:39.146205       1 proxier.go:661] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	I0415 19:11:39.224919       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0415 19:11:39.225077       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0415 19:11:39.346570       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0415 19:11:39.346871       1 server_others.go:185] Using iptables Proxier.
	I0415 19:11:39.348812       1 server.go:650] Version: v1.20.0
	I0415 19:11:39.405249       1 config.go:315] Starting service config controller
	I0415 19:11:39.405446       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0415 19:11:39.405624       1 config.go:224] Starting endpoint slice config controller
	I0415 19:11:39.405648       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0415 19:11:39.505872       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0415 19:11:39.505973       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-scheduler [29ce0c645316] <==
	E0415 19:08:13.546616       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0415 19:08:13.546720       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0415 19:08:13.624040       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0415 19:08:13.624049       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0415 19:08:13.624627       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0415 19:08:14.374864       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0415 19:08:14.447946       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0415 19:08:14.466968       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0415 19:08:14.488954       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0415 19:08:14.578330       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0415 19:08:14.625577       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0415 19:08:14.788145       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0415 19:08:14.788190       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0415 19:08:14.882518       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0415 19:08:14.901561       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0415 19:08:14.941182       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0415 19:08:15.212259       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0415 19:08:16.299980       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0415 19:08:16.497842       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0415 19:08:16.567609       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0415 19:08:16.789607       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0415 19:08:16.993890       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0415 19:08:17.006508       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0415 19:08:17.108951       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0415 19:08:21.237954       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kube-scheduler [8edd897fcde4] <==
	I0415 19:11:26.722308       1 serving.go:331] Generated self-signed cert in-memory
	W0415 19:11:32.713093       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0415 19:11:32.713149       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0415 19:11:32.713165       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0415 19:11:32.713173       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0415 19:11:33.110272       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0415 19:11:33.110643       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0415 19:11:33.110660       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0415 19:11:33.110691       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0415 19:11:33.411803       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Apr 15 19:15:23 old-k8s-version-075400 kubelet[1653]: E0415 19:15:23.510752    1653 remote_image.go:113] PullImage "registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/
	Apr 15 19:15:23 old-k8s-version-075400 kubelet[1653]: E0415 19:15:23.510913    1653 kuberuntime_image.go:51] Pull image "registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/
	Apr 15 19:15:23 old-k8s-version-075400 kubelet[1653]: E0415 19:15:23.511153    1653 kuberuntime_manager.go:829] container &Container{Name:dashboard-metrics-scraper,Image:registry.k8s.io/echoserver:1.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:8000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-volume,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kubernetes-dashboard-token-5cnx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 8000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:30,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:*2001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740): ErrImagePull: rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/
	Apr 15 19:15:23 old-k8s-version-075400 kubelet[1653]: E0415 19:15:23.511199    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Apr 15 19:15:36 old-k8s-version-075400 kubelet[1653]: E0415 19:15:36.301124    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 15 19:15:38 old-k8s-version-075400 kubelet[1653]: E0415 19:15:38.301726    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Apr 15 19:15:47 old-k8s-version-075400 kubelet[1653]: E0415 19:15:47.298701    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 15 19:15:50 old-k8s-version-075400 kubelet[1653]: E0415 19:15:50.298915    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Apr 15 19:16:02 old-k8s-version-075400 kubelet[1653]: E0415 19:16:02.298116    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 15 19:16:02 old-k8s-version-075400 kubelet[1653]: E0415 19:16:02.298347    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Apr 15 19:16:15 old-k8s-version-075400 kubelet[1653]: E0415 19:16:15.317907    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Apr 15 19:16:16 old-k8s-version-075400 kubelet[1653]: E0415 19:16:16.301254    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 15 19:16:18 old-k8s-version-075400 kubelet[1653]: W0415 19:16:18.410126    1653 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	Apr 15 19:16:18 old-k8s-version-075400 kubelet[1653]: W0415 19:16:18.412123    1653 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu0/online: open /sys/devices/system/cpu/cpu0/online: no such file or directory
	Apr 15 19:16:27 old-k8s-version-075400 kubelet[1653]: E0415 19:16:27.297988    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 15 19:16:28 old-k8s-version-075400 kubelet[1653]: E0415 19:16:28.299384    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Apr 15 19:16:42 old-k8s-version-075400 kubelet[1653]: E0415 19:16:42.296089    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 15 19:16:42 old-k8s-version-075400 kubelet[1653]: E0415 19:16:42.297293    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Apr 15 19:16:53 old-k8s-version-075400 kubelet[1653]: E0415 19:16:53.293453    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Apr 15 19:16:56 old-k8s-version-075400 kubelet[1653]: E0415 19:16:56.293967    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 15 19:17:04 old-k8s-version-075400 kubelet[1653]: E0415 19:17:04.293137    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Apr 15 19:17:09 old-k8s-version-075400 kubelet[1653]: E0415 19:17:09.296445    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 15 19:17:15 old-k8s-version-075400 kubelet[1653]: E0415 19:17:15.292636    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Apr 15 19:17:21 old-k8s-version-075400 kubelet[1653]: E0415 19:17:21.293251    1653 pod_workers.go:191] Error syncing pod 598e5f0f-09a9-421c-8573-b2494d744971 ("metrics-server-9975d5f86-8mprn_kube-system(598e5f0f-09a9-421c-8573-b2494d744971)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 15 19:17:26 old-k8s-version-075400 kubelet[1653]: E0415 19:17:26.311003    1653 pod_workers.go:191] Error syncing pod d726a245-e815-4bf5-85ba-210162e5a740 ("dashboard-metrics-scraper-8d5bb5db8-jxcp7_kubernetes-dashboard(d726a245-e815-4bf5-85ba-210162e5a740)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	
	
	==> kubernetes-dashboard [3733b3515cc0] <==
	2024/04/15 19:12:18 Starting overwatch
	2024/04/15 19:12:18 Using namespace: kubernetes-dashboard
	2024/04/15 19:12:18 Using in-cluster config to connect to apiserver
	2024/04/15 19:12:18 Using secret token for csrf signing
	2024/04/15 19:12:18 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/04/15 19:12:18 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/04/15 19:12:18 Successful initial request to the apiserver, version: v1.20.0
	2024/04/15 19:12:18 Generating JWE encryption key
	2024/04/15 19:12:18 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/04/15 19:12:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/04/15 19:12:19 Initializing JWE encryption key from synchronized object
	2024/04/15 19:12:19 Creating in-cluster Sidecar client
	2024/04/15 19:12:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/15 19:12:19 Serving insecurely on HTTP port: 9090
	2024/04/15 19:12:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/15 19:13:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/15 19:13:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/15 19:14:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/15 19:14:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/15 19:15:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/15 19:15:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/15 19:16:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/15 19:16:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/15 19:17:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [4cc2abc051be] <==
	I0415 19:12:15.462758       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0415 19:12:15.516745       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0415 19:12:15.517028       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0415 19:12:33.055251       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0415 19:12:33.055685       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-075400_f2185479-bd3d-4ecb-8709-c5015bcdba08!
	I0415 19:12:33.056130       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0226cddb-dc49-4a58-871f-a452177693fb", APIVersion:"v1", ResourceVersion:"796", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-075400_f2185479-bd3d-4ecb-8709-c5015bcdba08 became leader
	I0415 19:12:33.156831       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-075400_f2185479-bd3d-4ecb-8709-c5015bcdba08!
	
	
	==> storage-provisioner [571b76208459] <==
	I0415 19:11:38.608745       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0415 19:11:59.655822       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 19:17:24.616886    4944 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-075400 -n old-k8s-version-075400
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-075400 -n old-k8s-version-075400: (1.3145211s)
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-075400 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-8mprn dashboard-metrics-scraper-8d5bb5db8-jxcp7
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-075400 describe pod metrics-server-9975d5f86-8mprn dashboard-metrics-scraper-8d5bb5db8-jxcp7
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-075400 describe pod metrics-server-9975d5f86-8mprn dashboard-metrics-scraper-8d5bb5db8-jxcp7: exit status 1 (441.7979ms)

                                                
                                                
** stderr ** 
	E0415 19:17:29.889012    3844 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0415 19:17:29.981065    3844 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0415 19:17:29.993460    3844 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0415 19:17:30.004356    3844 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	Error from server (NotFound): pods "metrics-server-9975d5f86-8mprn" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-8d5bb5db8-jxcp7" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-075400 describe pod metrics-server-9975d5f86-8mprn dashboard-metrics-scraper-8d5bb5db8-jxcp7: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (422.02s)
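For context on this failure: the repeated ErrImagePull/ImagePullBackOff entries in the kubelet log above come from two pulls that cannot succeed in this environment. registry.k8s.io/echoserver:1.4 is still published as Docker Image manifest v2, schema 1, which the Docker engine in this run rejects by default (the "[DEPRECATION NOTICE]" errors), and metrics-server is configured to pull fake.domain/registry.k8s.io/echoserver:1.4, a registry host that does not resolve. A minimal sketch, assuming only a local docker CLI on PATH and not part of the test suite, of how the schema 1 rejection can be reproduced outside the harness:

	// schema1_check.go - illustrative sketch, not part of minikube's tests.
	// It asks the local Docker daemon to pull the image and looks for the
	// schema 1 deprecation notice that kubelet surfaced above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "pull", "registry.k8s.io/echoserver:1.4").CombinedOutput()
		if err != nil && strings.Contains(string(out), "DEPRECATION NOTICE") {
			fmt.Println("pull rejected: image is still Docker Image manifest v2, schema 1")
			return
		}
		fmt.Printf("pull finished: err=%v\n%s", err, out)
	}

As long as those pulls keep failing, the dashboard-metrics-scraper and metrics-server pods keep backing off, which is consistent with the non-running pods reported in the post-mortem above.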

                                                
                                    

Test pass (313/345)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 11.46
4 TestDownloadOnly/v1.20.0/preload-exists 0.08
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.77
9 TestDownloadOnly/v1.20.0/DeleteAll 2.48
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 1.68
12 TestDownloadOnly/v1.29.3/json-events 6.87
13 TestDownloadOnly/v1.29.3/preload-exists 0
16 TestDownloadOnly/v1.29.3/kubectl 0
17 TestDownloadOnly/v1.29.3/LogsDuration 0.28
18 TestDownloadOnly/v1.29.3/DeleteAll 2.32
19 TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds 1.23
21 TestDownloadOnly/v1.30.0-rc.2/json-events 7.23
22 TestDownloadOnly/v1.30.0-rc.2/preload-exists 0
25 TestDownloadOnly/v1.30.0-rc.2/kubectl 0
26 TestDownloadOnly/v1.30.0-rc.2/LogsDuration 0.33
27 TestDownloadOnly/v1.30.0-rc.2/DeleteAll 2.08
28 TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds 1.27
29 TestDownloadOnlyKic 3.66
30 TestBinaryMirror 3.39
31 TestOffline 165.83
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.26
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.25
36 TestAddons/Setup 525.29
40 TestAddons/parallel/InspektorGadget 15.11
41 TestAddons/parallel/MetricsServer 8
42 TestAddons/parallel/HelmTiller 32.52
44 TestAddons/parallel/CSI 85.44
45 TestAddons/parallel/Headlamp 25.37
46 TestAddons/parallel/CloudSpanner 7.24
47 TestAddons/parallel/LocalPath 33.32
48 TestAddons/parallel/NvidiaDevicePlugin 7.83
49 TestAddons/parallel/Yakd 5.07
52 TestAddons/serial/GCPAuth/Namespaces 0.38
53 TestAddons/StoppedEnableDisable 14.32
54 TestCertOptions 99.72
55 TestCertExpiration 307.69
56 TestDockerFlags 89.02
57 TestForceSystemdFlag 84.32
58 TestForceSystemdEnv 92.78
65 TestErrorSpam/start 3.89
66 TestErrorSpam/status 3.81
67 TestErrorSpam/pause 3.94
68 TestErrorSpam/unpause 4.84
69 TestErrorSpam/stop 20.61
72 TestFunctional/serial/CopySyncFile 0.03
73 TestFunctional/serial/StartWithProxy 79.03
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 43.19
76 TestFunctional/serial/KubeContext 0.13
77 TestFunctional/serial/KubectlGetPods 0.23
80 TestFunctional/serial/CacheCmd/cache/add_remote 6.63
81 TestFunctional/serial/CacheCmd/cache/add_local 4.21
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.26
83 TestFunctional/serial/CacheCmd/cache/list 0.26
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 1.17
85 TestFunctional/serial/CacheCmd/cache/cache_reload 5.2
86 TestFunctional/serial/CacheCmd/cache/delete 0.53
87 TestFunctional/serial/MinikubeKubectlCmd 0.48
89 TestFunctional/serial/ExtraConfig 49.19
90 TestFunctional/serial/ComponentHealth 0.18
91 TestFunctional/serial/LogsCmd 2.66
92 TestFunctional/serial/LogsFileCmd 2.79
93 TestFunctional/serial/InvalidService 6.01
97 TestFunctional/parallel/DryRun 3.22
98 TestFunctional/parallel/InternationalLanguage 1.37
99 TestFunctional/parallel/StatusCmd 5.35
104 TestFunctional/parallel/AddonsCmd 0.74
105 TestFunctional/parallel/PersistentVolumeClaim 61.19
107 TestFunctional/parallel/SSHCmd 2.68
108 TestFunctional/parallel/CpCmd 7.63
109 TestFunctional/parallel/MySQL 74.71
110 TestFunctional/parallel/FileSync 1.16
111 TestFunctional/parallel/CertSync 7.83
115 TestFunctional/parallel/NodeLabels 0.19
117 TestFunctional/parallel/NonActiveRuntimeDisabled 1.22
119 TestFunctional/parallel/License 3.12
120 TestFunctional/parallel/ServiceCmd/DeployApp 20.43
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 1.85
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 21.68
126 TestFunctional/parallel/Version/short 0.28
127 TestFunctional/parallel/Version/components 2.5
128 TestFunctional/parallel/ImageCommands/ImageListShort 0.85
129 TestFunctional/parallel/ImageCommands/ImageListTable 0.9
130 TestFunctional/parallel/ImageCommands/ImageListJson 0.96
131 TestFunctional/parallel/ImageCommands/ImageListYaml 0.89
132 TestFunctional/parallel/ImageCommands/ImageBuild 9.31
133 TestFunctional/parallel/ImageCommands/Setup 4.36
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 13.67
135 TestFunctional/parallel/ServiceCmd/List 1.59
136 TestFunctional/parallel/ServiceCmd/JSONOutput 1.43
137 TestFunctional/parallel/ServiceCmd/HTTPS 15.02
138 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.21
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.22
144 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 5.91
145 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 15.04
146 TestFunctional/parallel/ServiceCmd/Format 15.03
147 TestFunctional/parallel/DockerEnv/powershell 9.14
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 3.8
149 TestFunctional/parallel/ServiceCmd/URL 15.02
150 TestFunctional/parallel/ImageCommands/ImageRemove 1.77
151 TestFunctional/parallel/UpdateContextCmd/no_changes 0.66
152 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.68
153 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.65
154 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 6.03
155 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 6.72
156 TestFunctional/parallel/ProfileCmd/profile_not_create 2.54
157 TestFunctional/parallel/ProfileCmd/profile_list 2.64
158 TestFunctional/parallel/ProfileCmd/profile_json_output 1.94
159 TestFunctional/delete_addon-resizer_images 0.43
160 TestFunctional/delete_my-image_image 0.17
161 TestFunctional/delete_minikube_cached_images 0.17
165 TestMultiControlPlane/serial/StartCluster 229.29
166 TestMultiControlPlane/serial/DeployApp 14.09
167 TestMultiControlPlane/serial/PingHostFromPods 3.61
168 TestMultiControlPlane/serial/AddWorkerNode 58.46
169 TestMultiControlPlane/serial/NodeLabels 0.17
170 TestMultiControlPlane/serial/HAppyAfterClusterStart 3.35
171 TestMultiControlPlane/serial/CopyFile 70.09
172 TestMultiControlPlane/serial/StopSecondaryNode 15.22
173 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 2.55
174 TestMultiControlPlane/serial/RestartSecondaryNode 61.09
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 3.49
176 TestMultiControlPlane/serial/RestartClusterKeepsNodes 261.31
177 TestMultiControlPlane/serial/DeleteSecondaryNode 22.42
178 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 2.33
179 TestMultiControlPlane/serial/StopCluster 37.46
180 TestMultiControlPlane/serial/RestartCluster 113.16
181 TestMultiControlPlane/serial/DegradedAfterClusterRestart 2.3
182 TestMultiControlPlane/serial/AddSecondaryNode 78.82
183 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 3.35
186 TestImageBuild/serial/Setup 67.58
187 TestImageBuild/serial/NormalBuild 3.93
188 TestImageBuild/serial/BuildWithBuildArg 2.85
189 TestImageBuild/serial/BuildWithDockerIgnore 2.03
190 TestImageBuild/serial/BuildWithSpecifiedDockerfile 2.45
194 TestJSONOutput/start/Command 83.01
195 TestJSONOutput/start/Audit 0
197 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/pause/Command 1.77
201 TestJSONOutput/pause/Audit 0
203 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/unpause/Command 1.57
207 TestJSONOutput/unpause/Audit 0
209 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
210 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
212 TestJSONOutput/stop/Command 7.51
213 TestJSONOutput/stop/Audit 0
215 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
216 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
217 TestErrorJSONOutput 1.37
219 TestKicCustomNetwork/create_custom_network 77.31
220 TestKicCustomNetwork/use_default_bridge_network 77.65
221 TestKicExistingNetwork 78.91
222 TestKicCustomSubnet 78.21
223 TestKicStaticIP 79.94
224 TestMainNoArgs 0.24
225 TestMinikubeProfile 147
228 TestMountStart/serial/StartWithMountFirst 20.03
229 TestMountStart/serial/VerifyMountFirst 1.16
230 TestMountStart/serial/StartWithMountSecond 19.1
231 TestMountStart/serial/VerifyMountSecond 1.1
232 TestMountStart/serial/DeleteFirst 3.97
233 TestMountStart/serial/VerifyMountPostDelete 1.15
234 TestMountStart/serial/Stop 2.54
235 TestMountStart/serial/RestartStopped 13.35
236 TestMountStart/serial/VerifyMountPostStop 1.11
239 TestMultiNode/serial/FreshStart2Nodes 146.17
240 TestMultiNode/serial/DeployApp2Nodes 24.93
241 TestMultiNode/serial/PingHostFrom2Pods 2.55
242 TestMultiNode/serial/AddNode 53.12
243 TestMultiNode/serial/MultiNodeLabels 0.19
244 TestMultiNode/serial/ProfileList 1.75
245 TestMultiNode/serial/CopyFile 39.65
246 TestMultiNode/serial/StopNode 6.51
247 TestMultiNode/serial/StartAfterStop 20.31
248 TestMultiNode/serial/RestartKeepsNodes 107.27
249 TestMultiNode/serial/DeleteNode 13.35
250 TestMultiNode/serial/StopMultiNode 25.41
251 TestMultiNode/serial/RestartMultiNode 70.83
252 TestMultiNode/serial/ValidateNameConflict 74.64
256 TestPreload 161.24
257 TestScheduledStopWindows 138.14
261 TestInsufficientStorage 46.74
262 TestRunningBinaryUpgrade 244.71
264 TestKubernetesUpgrade 500.74
265 TestMissingContainerUpgrade 392.44
266 TestStoppedBinaryUpgrade/Setup 0.81
278 TestStoppedBinaryUpgrade/Upgrade 329.45
280 TestNoKubernetes/serial/StartNoK8sWithVersion 0.53
281 TestNoKubernetes/serial/StartWithK8s 188.09
282 TestNoKubernetes/serial/StartWithStopK8s 29.38
283 TestNoKubernetes/serial/Start 23.12
284 TestNoKubernetes/serial/VerifyK8sNotRunning 1.36
285 TestNoKubernetes/serial/ProfileList 8.43
286 TestNoKubernetes/serial/Stop 16.39
287 TestNoKubernetes/serial/StartNoArgs 18.05
288 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 1.43
297 TestPause/serial/Start 139.89
298 TestStoppedBinaryUpgrade/MinikubeLogs 6.02
299 TestPause/serial/SecondStartNoReconfiguration 46.41
300 TestPause/serial/Pause 6.53
301 TestPause/serial/VerifyStatus 1.39
302 TestPause/serial/Unpause 4.14
304 TestNetworkPlugins/group/auto/Start 95.44
305 TestNetworkPlugins/group/calico/Start 178.54
306 TestNetworkPlugins/group/auto/KubeletFlags 1.26
307 TestNetworkPlugins/group/auto/NetCatPod 17.63
308 TestNetworkPlugins/group/auto/DNS 0.35
309 TestNetworkPlugins/group/auto/Localhost 0.32
310 TestNetworkPlugins/group/auto/HairPin 0.32
311 TestNetworkPlugins/group/custom-flannel/Start 104.21
312 TestNetworkPlugins/group/calico/ControllerPod 6.03
313 TestNetworkPlugins/group/calico/KubeletFlags 1.4
314 TestNetworkPlugins/group/calico/NetCatPod 20.72
315 TestNetworkPlugins/group/false/Start 89.66
316 TestNetworkPlugins/group/calico/DNS 0.38
317 TestNetworkPlugins/group/calico/Localhost 0.36
318 TestNetworkPlugins/group/calico/HairPin 0.33
319 TestNetworkPlugins/group/custom-flannel/KubeletFlags 1.21
320 TestNetworkPlugins/group/custom-flannel/NetCatPod 19.62
321 TestNetworkPlugins/group/custom-flannel/DNS 0.46
322 TestNetworkPlugins/group/custom-flannel/Localhost 0.36
323 TestNetworkPlugins/group/custom-flannel/HairPin 0.34
324 TestNetworkPlugins/group/kindnet/Start 125.84
325 TestNetworkPlugins/group/false/KubeletFlags 1.2
326 TestNetworkPlugins/group/false/NetCatPod 25.61
327 TestNetworkPlugins/group/false/DNS 0.44
328 TestNetworkPlugins/group/false/Localhost 0.34
329 TestNetworkPlugins/group/false/HairPin 0.35
330 TestNetworkPlugins/group/flannel/Start 125.62
331 TestNetworkPlugins/group/enable-default-cni/Start 100.79
332 TestNetworkPlugins/group/bridge/Start 114.09
333 TestNetworkPlugins/group/kindnet/ControllerPod 6.03
334 TestNetworkPlugins/group/kindnet/KubeletFlags 1.51
335 TestNetworkPlugins/group/kindnet/NetCatPod 18.67
336 TestNetworkPlugins/group/kindnet/DNS 0.44
337 TestNetworkPlugins/group/kindnet/Localhost 0.45
338 TestNetworkPlugins/group/kindnet/HairPin 0.41
339 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 1.32
340 TestNetworkPlugins/group/enable-default-cni/NetCatPod 18.69
341 TestNetworkPlugins/group/flannel/ControllerPod 6.02
342 TestNetworkPlugins/group/flannel/KubeletFlags 1.34
343 TestNetworkPlugins/group/flannel/NetCatPod 17.54
344 TestNetworkPlugins/group/enable-default-cni/DNS 0.4
345 TestNetworkPlugins/group/enable-default-cni/Localhost 0.38
346 TestNetworkPlugins/group/enable-default-cni/HairPin 0.4
347 TestNetworkPlugins/group/flannel/DNS 0.43
348 TestNetworkPlugins/group/flannel/Localhost 0.38
349 TestNetworkPlugins/group/flannel/HairPin 0.37
350 TestNetworkPlugins/group/kubenet/Start 103.14
351 TestNetworkPlugins/group/bridge/KubeletFlags 1.79
352 TestNetworkPlugins/group/bridge/NetCatPod 25.72
353 TestNetworkPlugins/group/bridge/DNS 0.44
354 TestNetworkPlugins/group/bridge/Localhost 0.38
355 TestNetworkPlugins/group/bridge/HairPin 0.51
357 TestStartStop/group/old-k8s-version/serial/FirstStart 210.25
359 TestStartStop/group/no-preload/serial/FirstStart 136.41
361 TestStartStop/group/embed-certs/serial/FirstStart 109.61
362 TestNetworkPlugins/group/kubenet/KubeletFlags 1.75
363 TestNetworkPlugins/group/kubenet/NetCatPod 39.39
364 TestNetworkPlugins/group/kubenet/DNS 0.44
365 TestNetworkPlugins/group/kubenet/Localhost 0.44
366 TestNetworkPlugins/group/kubenet/HairPin 0.39
367 TestStartStop/group/no-preload/serial/DeployApp 11.78
368 TestStartStop/group/embed-certs/serial/DeployApp 11.77
370 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 119.24
371 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 3.45
372 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 4.51
373 TestStartStop/group/no-preload/serial/Stop 14.65
374 TestStartStop/group/embed-certs/serial/Stop 13.44
375 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 1.14
376 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 1.28
377 TestStartStop/group/no-preload/serial/SecondStart 283.59
378 TestStartStop/group/embed-certs/serial/SecondStart 284.76
379 TestStartStop/group/old-k8s-version/serial/DeployApp 11.2
380 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 3.07
381 TestStartStop/group/old-k8s-version/serial/Stop 12.88
382 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 1.24
384 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.81
385 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.83
386 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.92
387 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 1.21
388 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 283.71
389 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.02
390 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.02
391 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.36
392 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.38
393 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.85
394 TestStartStop/group/no-preload/serial/Pause 9.59
395 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.87
396 TestStartStop/group/embed-certs/serial/Pause 9.83
398 TestStartStop/group/newest-cni/serial/FirstStart 81.21
399 TestStartStop/group/newest-cni/serial/DeployApp 0
400 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.97
401 TestStartStop/group/newest-cni/serial/Stop 7.63
402 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 1.15
403 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.02
404 TestStartStop/group/newest-cni/serial/SecondStart 33.02
405 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.41
406 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.91
407 TestStartStop/group/default-k8s-diff-port/serial/Pause 9.35
408 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
409 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
410 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.8
411 TestStartStop/group/newest-cni/serial/Pause 8.61
412 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.02
413 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.44
414 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.84
415 TestStartStop/group/old-k8s-version/serial/Pause 9.3
x
+
TestDownloadOnly/v1.20.0/json-events (11.46s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-211500 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-211500 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker: (11.4567987s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (11.46s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.77s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-211500
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-211500: exit status 85 (772.6071ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|----------------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        |    Version     |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|----------------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-211500 | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:38 UTC |          |
	|         | -p download-only-211500        |                      |                   |                |                     |          |
	|         | --force --alsologtostderr      |                      |                   |                |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |                   |                |                     |          |
	|         | --container-runtime=docker     |                      |                   |                |                     |          |
	|         | --driver=docker                |                      |                   |                |                     |          |
	|---------|--------------------------------|----------------------|-------------------|----------------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/15 17:38:39
	Running on machine: minikube4
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0415 17:38:39.600405    9080 out.go:291] Setting OutFile to fd 656 ...
	I0415 17:38:39.600405    9080 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 17:38:39.600405    9080 out.go:304] Setting ErrFile to fd 660...
	I0415 17:38:39.600405    9080 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0415 17:38:39.613961    9080 root.go:314] Error reading config file at C:\Users\jenkins.minikube4\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube4\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0415 17:38:39.624309    9080 out.go:298] Setting JSON to true
	I0415 17:38:39.627036    9080 start.go:129] hostinfo: {"hostname":"minikube4","uptime":19189,"bootTime":1713183529,"procs":204,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W0415 17:38:39.627036    9080 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 17:38:39.635280    9080 out.go:97] [download-only-211500] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0415 17:38:39.635280    9080 notify.go:220] Checking for updates...
	W0415 17:38:39.635280    9080 preload.go:294] Failed to list preload files: open C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0415 17:38:39.637984    9080 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0415 17:38:39.640374    9080 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I0415 17:38:39.642752    9080 out.go:169] MINIKUBE_LOCATION=18634
	I0415 17:38:39.645085    9080 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0415 17:38:39.648118    9080 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0415 17:38:39.649899    9080 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 17:38:39.942388    9080 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0415 17:38:39.952170    9080 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 17:38:41.208845    9080 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.2566164s)
	I0415 17:38:41.209973    9080 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:80 SystemTime:2024-04-15 17:38:41.170513063 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion
:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:D
ocker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 17:38:41.214317    9080 out.go:97] Using the docker driver based on user configuration
	I0415 17:38:41.214406    9080 start.go:297] selected driver: docker
	I0415 17:38:41.214500    9080 start.go:901] validating driver "docker" against <nil>
	I0415 17:38:41.230911    9080 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 17:38:41.556162    9080 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:82 SystemTime:2024-04-15 17:38:41.516030413 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion
:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:D
ocker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 17:38:41.556162    9080 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 17:38:41.664960    9080 start_flags.go:393] Using suggested 16300MB memory alloc based on sys=65534MB, container=32098MB
	I0415 17:38:41.665920    9080 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0415 17:38:41.671811    9080 out.go:169] Using Docker Desktop driver with root privileges
	I0415 17:38:41.674343    9080 cni.go:84] Creating CNI manager for ""
	I0415 17:38:41.674343    9080 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0415 17:38:41.675132    9080 start.go:340] cluster config:
	{Name:download-only-211500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:16300 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-211500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 17:38:41.678226    9080 out.go:97] Starting "download-only-211500" primary control-plane node in "download-only-211500" cluster
	I0415 17:38:41.678278    9080 cache.go:121] Beginning downloading kic base image for docker with docker
	I0415 17:38:41.680440    9080 out.go:97] Pulling base image v0.0.43-1713176859-18634 ...
	I0415 17:38:41.680517    9080 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0415 17:38:41.680575    9080 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b in local docker daemon
	I0415 17:38:41.723143    9080 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0415 17:38:41.723270    9080 cache.go:56] Caching tarball of preloaded images
	I0415 17:38:41.723660    9080 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0415 17:38:41.727422    9080 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0415 17:38:41.727422    9080 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0415 17:38:41.792418    9080 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0415 17:38:41.853351    9080 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b to local cache
	I0415 17:38:41.853351    9080 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.43-1713176859-18634@sha256_aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b.tar
	I0415 17:38:41.854358    9080 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.43-1713176859-18634@sha256_aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b.tar
	I0415 17:38:41.854358    9080 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b in local cache directory
	I0415 17:38:41.854358    9080 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b to local cache
	I0415 17:38:46.426249    9080 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0415 17:38:46.427253    9080 preload.go:255] verifying checksum of C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0415 17:38:47.463606    9080 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0415 17:38:47.463606    9080 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\download-only-211500\config.json ...
	I0415 17:38:47.464610    9080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\download-only-211500\config.json: {Name:mk9408e1661313990c2dca7aeb527f900f63f7d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 17:38:47.465608    9080 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0415 17:38:47.466606    9080 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\windows\amd64\v1.20.0/kubectl.exe
	I0415 17:38:49.197008    9080 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b as a tarball
	
	
	* The control-plane node download-only-211500 host does not exist
	  To start a cluster, run: "minikube start -p download-only-211500"

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 17:38:51.082467   15988 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.77s)
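The exit status 85 recorded above is expected for a download-only profile: "minikube logs" reports that the control-plane node host does not exist, and the test (aaa_download_only_test.go:184-185) only logs the non-zero exit before passing. A minimal sketch, using only Go's standard library and the profile name from this run, of how an exit code like this can be read back from os/exec:

	// exitcode_sketch.go - illustrative only, not the code in aaa_download_only_test.go.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Running "minikube logs" against a download-only profile is expected to fail;
		// the interesting part is recovering the exit code from the returned error.
		cmd := exec.Command("out/minikube-windows-amd64.exe", "logs", "-p", "download-only-211500")
		err := cmd.Run()

		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			fmt.Println("minikube logs exit code:", exitErr.ExitCode()) // 85 in the run above
		} else if err != nil {
			fmt.Println("could not run minikube logs:", err)
		}
	}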

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (2.48s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (2.4756876s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (2.48s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.68s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-211500
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-211500: (1.6745107s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.68s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.3/json-events (6.87s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-394400 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=docker --driver=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-394400 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=docker --driver=docker: (6.8740506s)
--- PASS: TestDownloadOnly/v1.29.3/json-events (6.87s)

TestDownloadOnly/v1.29.3/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.29.3/preload-exists
--- PASS: TestDownloadOnly/v1.29.3/preload-exists (0.00s)

TestDownloadOnly/v1.29.3/kubectl (0s)
=== RUN   TestDownloadOnly/v1.29.3/kubectl
--- PASS: TestDownloadOnly/v1.29.3/kubectl (0.00s)

TestDownloadOnly/v1.29.3/LogsDuration (0.28s)
=== RUN   TestDownloadOnly/v1.29.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-394400
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-394400: exit status 85 (274.4134ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	| Command |              Args              |       Profile        |       User        |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-211500 | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:38 UTC |                     |
	|         | -p download-only-211500        |                      |                   |                |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |                |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |                   |                |                     |                     |
	|         | --container-runtime=docker     |                      |                   |                |                     |                     |
	|         | --driver=docker                |                      |                   |                |                     |                     |
	| delete  | --all                          | minikube             | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:38 UTC | 15 Apr 24 17:38 UTC |
	| delete  | -p download-only-211500        | download-only-211500 | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:38 UTC | 15 Apr 24 17:38 UTC |
	| start   | -o=json --download-only        | download-only-394400 | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:38 UTC |                     |
	|         | -p download-only-394400        |                      |                   |                |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |                |                     |                     |
	|         | --kubernetes-version=v1.29.3   |                      |                   |                |                     |                     |
	|         | --container-runtime=docker     |                      |                   |                |                     |                     |
	|         | --driver=docker                |                      |                   |                |                     |                     |
	|---------|--------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/15 17:38:56
	Running on machine: minikube4
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0415 17:38:56.075963   14944 out.go:291] Setting OutFile to fd 752 ...
	I0415 17:38:56.076900   14944 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 17:38:56.076900   14944 out.go:304] Setting ErrFile to fd 756...
	I0415 17:38:56.076900   14944 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 17:38:56.097897   14944 out.go:298] Setting JSON to true
	I0415 17:38:56.104665   14944 start.go:129] hostinfo: {"hostname":"minikube4","uptime":19206,"bootTime":1713183529,"procs":205,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W0415 17:38:56.104665   14944 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 17:38:56.112544   14944 out.go:97] [download-only-394400] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0415 17:38:56.112777   14944 notify.go:220] Checking for updates...
	I0415 17:38:56.115645   14944 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0415 17:38:56.120133   14944 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I0415 17:38:56.125798   14944 out.go:169] MINIKUBE_LOCATION=18634
	I0415 17:38:56.130593   14944 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0415 17:38:56.136679   14944 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0415 17:38:56.137434   14944 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 17:38:56.403977   14944 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0415 17:38:56.414010   14944 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 17:38:56.765954   14944 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:80 SystemTime:2024-04-15 17:38:56.724253355 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion
:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:D
ocker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 17:38:57.026709   14944 out.go:97] Using the docker driver based on user configuration
	I0415 17:38:57.026709   14944 start.go:297] selected driver: docker
	I0415 17:38:57.026709   14944 start.go:901] validating driver "docker" against <nil>
	I0415 17:38:57.046514   14944 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 17:38:57.376949   14944 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:80 SystemTime:2024-04-15 17:38:57.340143267 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion
:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:D
ocker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 17:38:57.377477   14944 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 17:38:57.429057   14944 start_flags.go:393] Using suggested 16300MB memory alloc based on sys=65534MB, container=32098MB
	I0415 17:38:57.430571   14944 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0415 17:38:57.432829   14944 out.go:169] Using Docker Desktop driver with root privileges
	I0415 17:38:57.435511   14944 cni.go:84] Creating CNI manager for ""
	I0415 17:38:57.436079   14944 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 17:38:57.436128   14944 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0415 17:38:57.436242   14944 start.go:340] cluster config:
	{Name:download-only-394400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:16300 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:download-only-394400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 17:38:57.438629   14944 out.go:97] Starting "download-only-394400" primary control-plane node in "download-only-394400" cluster
	I0415 17:38:57.438629   14944 cache.go:121] Beginning downloading kic base image for docker with docker
	I0415 17:38:57.441171   14944 out.go:97] Pulling base image v0.0.43-1713176859-18634 ...
	I0415 17:38:57.441171   14944 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 17:38:57.442204   14944 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b in local docker daemon
	I0415 17:38:57.480749   14944 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0415 17:38:57.480832   14944 cache.go:56] Caching tarball of preloaded images
	I0415 17:38:57.481201   14944 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 17:38:57.484014   14944 out.go:97] Downloading Kubernetes v1.29.3 preload ...
	I0415 17:38:57.484014   14944 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 ...
	I0415 17:38:57.555425   14944 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4?checksum=md5:2fedab548578a1509c0f422889c3109c -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0415 17:38:57.608683   14944 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b to local cache
	I0415 17:38:57.608683   14944 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.43-1713176859-18634@sha256_aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b.tar
	I0415 17:38:57.608683   14944 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.43-1713176859-18634@sha256_aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b.tar
	I0415 17:38:57.609219   14944 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b in local cache directory
	I0415 17:38:57.609357   14944 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b in local cache directory, skipping pull
	I0415 17:38:57.609357   14944 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b exists in cache, skipping pull
	I0415 17:38:57.609357   14944 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b as a tarball
	I0415 17:39:00.737768   14944 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 ...
	I0415 17:39:00.738293   14944 preload.go:255] verifying checksum of C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-394400 host does not exist
	  To start a cluster, run: "minikube start -p download-only-394400"
-- /stdout --
** stderr ** 
	W0415 17:39:02.877267    9644 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.3/LogsDuration (0.28s)
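The preload steps logged above ("getting checksum", "saving checksum", "verifying checksum") amount to downloading the tarball and comparing its MD5 digest with the value carried in the ?checksum=md5:... query parameter. A minimal sketch of that verification step, with hypothetical helper names rather than minikube's own:

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyMD5 hashes the file at path and compares it with the expected hex digest.
func verifyMD5(path, expected string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != expected {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, expected)
	}
	return nil
}

func main() {
	// Digest taken from the v1.29.3 preload URL in the log above.
	err := verifyMD5("preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4", "2fedab548578a1509c0f422889c3109c")
	fmt.Println(err)
}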

TestDownloadOnly/v1.29.3/DeleteAll (2.32s)
=== RUN   TestDownloadOnly/v1.29.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (2.3161786s)
--- PASS: TestDownloadOnly/v1.29.3/DeleteAll (2.32s)

TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (1.23s)
=== RUN   TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-394400
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-394400: (1.2321936s)
--- PASS: TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (1.23s)

TestDownloadOnly/v1.30.0-rc.2/json-events (7.23s)
=== RUN   TestDownloadOnly/v1.30.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-003400 --force --alsologtostderr --kubernetes-version=v1.30.0-rc.2 --container-runtime=docker --driver=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-003400 --force --alsologtostderr --kubernetes-version=v1.30.0-rc.2 --container-runtime=docker --driver=docker: (7.2252822s)
--- PASS: TestDownloadOnly/v1.30.0-rc.2/json-events (7.23s)

TestDownloadOnly/v1.30.0-rc.2/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.30.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.30.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.30.0-rc.2/kubectl (0s)
=== RUN   TestDownloadOnly/v1.30.0-rc.2/kubectl
--- PASS: TestDownloadOnly/v1.30.0-rc.2/kubectl (0.00s)

TestDownloadOnly/v1.30.0-rc.2/LogsDuration (0.33s)
=== RUN   TestDownloadOnly/v1.30.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-003400
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-003400: exit status 85 (326.5358ms)
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	| Command |               Args                |       Profile        |       User        |    Version     |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-211500 | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:38 UTC |                     |
	|         | -p download-only-211500           |                      |                   |                |                     |                     |
	|         | --force --alsologtostderr         |                      |                   |                |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |                   |                |                     |                     |
	|         | --container-runtime=docker        |                      |                   |                |                     |                     |
	|         | --driver=docker                   |                      |                   |                |                     |                     |
	| delete  | --all                             | minikube             | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:38 UTC | 15 Apr 24 17:38 UTC |
	| delete  | -p download-only-211500           | download-only-211500 | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:38 UTC | 15 Apr 24 17:38 UTC |
	| start   | -o=json --download-only           | download-only-394400 | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:38 UTC |                     |
	|         | -p download-only-394400           |                      |                   |                |                     |                     |
	|         | --force --alsologtostderr         |                      |                   |                |                     |                     |
	|         | --kubernetes-version=v1.29.3      |                      |                   |                |                     |                     |
	|         | --container-runtime=docker        |                      |                   |                |                     |                     |
	|         | --driver=docker                   |                      |                   |                |                     |                     |
	| delete  | --all                             | minikube             | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:39 UTC | 15 Apr 24 17:39 UTC |
	| delete  | -p download-only-394400           | download-only-394400 | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:39 UTC | 15 Apr 24 17:39 UTC |
	| start   | -o=json --download-only           | download-only-003400 | minikube4\jenkins | v1.33.0-beta.0 | 15 Apr 24 17:39 UTC |                     |
	|         | -p download-only-003400           |                      |                   |                |                     |                     |
	|         | --force --alsologtostderr         |                      |                   |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2 |                      |                   |                |                     |                     |
	|         | --container-runtime=docker        |                      |                   |                |                     |                     |
	|         | --driver=docker                   |                      |                   |                |                     |                     |
	|---------|-----------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/15 17:39:06
	Running on machine: minikube4
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0415 17:39:06.782027    3200 out.go:291] Setting OutFile to fd 776 ...
	I0415 17:39:06.782903    3200 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 17:39:06.782979    3200 out.go:304] Setting ErrFile to fd 676...
	I0415 17:39:06.782979    3200 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 17:39:06.804758    3200 out.go:298] Setting JSON to true
	I0415 17:39:06.807815    3200 start.go:129] hostinfo: {"hostname":"minikube4","uptime":19216,"bootTime":1713183529,"procs":206,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W0415 17:39:06.808815    3200 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 17:39:06.813239    3200 out.go:97] [download-only-003400] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0415 17:39:06.813568    3200 notify.go:220] Checking for updates...
	I0415 17:39:06.815479    3200 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0415 17:39:06.817235    3200 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I0415 17:39:06.819857    3200 out.go:169] MINIKUBE_LOCATION=18634
	I0415 17:39:06.821518    3200 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0415 17:39:06.825493    3200 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0415 17:39:06.826134    3200 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 17:39:07.080824    3200 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0415 17:39:07.090965    3200 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 17:39:07.409469    3200 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:80 SystemTime:2024-04-15 17:39:07.366907465 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion
:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:D
ocker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 17:39:07.412055    3200 out.go:97] Using the docker driver based on user configuration
	I0415 17:39:07.412055    3200 start.go:297] selected driver: docker
	I0415 17:39:07.412055    3200 start.go:901] validating driver "docker" against <nil>
	I0415 17:39:07.430704    3200 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 17:39:07.752150    3200 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:80 SystemTime:2024-04-15 17:39:07.71133353 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Inde
xServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 E
xpected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:
0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Do
cker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 17:39:07.752691    3200 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 17:39:07.801762    3200 start_flags.go:393] Using suggested 16300MB memory alloc based on sys=65534MB, container=32098MB
	I0415 17:39:07.802613    3200 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0415 17:39:07.922835    3200 out.go:169] Using Docker Desktop driver with root privileges
	I0415 17:39:07.945491    3200 cni.go:84] Creating CNI manager for ""
	I0415 17:39:07.945555    3200 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 17:39:07.945555    3200 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0415 17:39:07.945555    3200 start.go:340] cluster config:
	{Name:download-only-003400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:16300 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:download-only-003400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterv
al:1m0s}
	I0415 17:39:07.948081    3200 out.go:97] Starting "download-only-003400" primary control-plane node in "download-only-003400" cluster
	I0415 17:39:07.948157    3200 cache.go:121] Beginning downloading kic base image for docker with docker
	I0415 17:39:07.950146    3200 out.go:97] Pulling base image v0.0.43-1713176859-18634 ...
	I0415 17:39:07.950223    3200 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime docker
	I0415 17:39:07.950297    3200 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b in local docker daemon
	I0415 17:39:07.989714    3200 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-rc.2/preloaded-images-k8s-v18-v1.30.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0415 17:39:07.989714    3200 cache.go:56] Caching tarball of preloaded images
	I0415 17:39:07.989714    3200 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime docker
	I0415 17:39:08.116483    3200 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b to local cache
	I0415 17:39:08.116483    3200 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.43-1713176859-18634@sha256_aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b.tar
	I0415 17:39:08.116483    3200 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.43-1713176859-18634@sha256_aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b.tar
	I0415 17:39:08.117397    3200 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b in local cache directory
	I0415 17:39:08.117397    3200 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b in local cache directory, skipping pull
	I0415 17:39:08.117397    3200 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b exists in cache, skipping pull
	I0415 17:39:08.117397    3200 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b as a tarball
	I0415 17:39:08.223492    3200 out.go:97] Downloading Kubernetes v1.30.0-rc.2 preload ...
	I0415 17:39:08.223492    3200 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0415 17:39:08.285782    3200 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-rc.2/preloaded-images-k8s-v18-v1.30.0-rc.2-docker-overlay2-amd64.tar.lz4?checksum=md5:9834337eee074d8b5e25932a2917a549 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0415 17:39:11.762954    3200 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0415 17:39:11.764118    3200 preload.go:255] verifying checksum of C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-003400 host does not exist
	  To start a cluster, run: "minikube start -p download-only-003400"
-- /stdout --
** stderr ** 
	W0415 17:39:13.946076   15112 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0-rc.2/LogsDuration (0.33s)
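The "windows sanitize" lines above show an image reference being mapped to a cache file name by replacing characters that are not allowed in Windows file names (the ':' in the tag and digest become '_'). A small sketch of that mapping, assuming the plain character substitution the before/after paths suggest:

package main

import (
	"fmt"
	"strings"
)

// sanitizeTarName turns an image reference into a Windows-safe cache file name,
// mirroring the substitution visible in the log (':' replaced with '_').
func sanitizeTarName(imageRef string) string {
	return strings.ReplaceAll(imageRef, ":", "_") + ".tar"
}

func main() {
	ref := "kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b"
	fmt.Println(sanitizeTarName(ref))
}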

TestDownloadOnly/v1.30.0-rc.2/DeleteAll (2.08s)
=== RUN   TestDownloadOnly/v1.30.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (2.0826102s)
--- PASS: TestDownloadOnly/v1.30.0-rc.2/DeleteAll (2.08s)

TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds (1.27s)
=== RUN   TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-003400
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-003400: (1.2679422s)
--- PASS: TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds (1.27s)

TestDownloadOnlyKic (3.66s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p download-docker-382700 --alsologtostderr --driver=docker
aaa_download_only_test.go:232: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p download-docker-382700 --alsologtostderr --driver=docker: (1.4445674s)
helpers_test.go:175: Cleaning up "download-docker-382700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-docker-382700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-docker-382700: (1.310412s)
--- PASS: TestDownloadOnlyKic (3.66s)

TestBinaryMirror (3.39s)
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-768100 --alsologtostderr --binary-mirror http://127.0.0.1:50489 --driver=docker
aaa_download_only_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-768100 --alsologtostderr --binary-mirror http://127.0.0.1:50489 --driver=docker: (1.7738284s)
helpers_test.go:175: Cleaning up "binary-mirror-768100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-768100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p binary-mirror-768100: (1.3433904s)
--- PASS: TestBinaryMirror (3.39s)

TestOffline (165.83s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-258500 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-258500 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker: (1m56.4799221s)
helpers_test.go:175: Cleaning up "offline-docker-258500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-258500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-258500: (49.3534583s)
--- PASS: TestOffline (165.83s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.26s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-661400
addons_test.go:928: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-661400: exit status 85 (257.8949ms)
-- stdout --
	* Profile "addons-661400" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-661400"
-- /stdout --
** stderr ** 
	W0415 17:39:28.466193    7704 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.26s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.25s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-661400
addons_test.go:939: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-661400: exit status 85 (247.3403ms)
-- stdout --
	* Profile "addons-661400" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-661400"
-- /stdout --
** stderr ** 
	W0415 17:39:28.465412    7312 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.25s)

TestAddons/Setup (525.29s)
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-661400 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-661400 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: (8m45.2914271s)
--- PASS: TestAddons/Setup (525.29s)

                                                
                                    
TestAddons/parallel/InspektorGadget (15.11s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-lz6hl" [20c92d10-be05-42c1-85a9-8647e945e516] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.0675722s
addons_test.go:841: (dbg) Run:  out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-661400
addons_test.go:841: (dbg) Done: out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-661400: (9.0429148s)
--- PASS: TestAddons/parallel/InspektorGadget (15.11s)

                                                
                                    
TestAddons/parallel/MetricsServer (8.00s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 30.9496ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-75d6c48ddd-r68r9" [9d2792b2-0d88-444a-86a8-ce8701cb75ae] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.0208467s
addons_test.go:415: (dbg) Run:  kubectl --context addons-661400 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-661400 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:432: (dbg) Done: out/minikube-windows-amd64.exe -p addons-661400 addons disable metrics-server --alsologtostderr -v=1: (2.7591507s)
--- PASS: TestAddons/parallel/MetricsServer (8.00s)

                                                
                                    
TestAddons/parallel/HelmTiller (32.52s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 30.9496ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-ttlwz" [2cbaf5bf-948b-44c4-9ca2-c2d1b4358adf] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.0206786s
addons_test.go:473: (dbg) Run:  kubectl --context addons-661400 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-661400 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (25.4471925s)
addons_test.go:490: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-661400 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:490: (dbg) Done: out/minikube-windows-amd64.exe -p addons-661400 addons disable helm-tiller --alsologtostderr -v=1: (2.0017604s)
--- PASS: TestAddons/parallel/HelmTiller (32.52s)

                                                
                                    
TestAddons/parallel/CSI (85.44s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 34.9253ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-661400 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661400 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-661400 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [594d066e-3660-4d06-a410-461269976daa] Pending
helpers_test.go:344: "task-pv-pod" [594d066e-3660-4d06-a410-461269976daa] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [594d066e-3660-4d06-a410-461269976daa] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 28.0334796s
addons_test.go:584: (dbg) Run:  kubectl --context addons-661400 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-661400 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-661400 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-661400 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-661400 delete pod task-pv-pod: (2.0478425s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-661400 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-661400 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-661400 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [703e710f-c544-4dab-a6b1-ffd618cd4b1a] Pending
helpers_test.go:344: "task-pv-pod-restore" [703e710f-c544-4dab-a6b1-ffd618cd4b1a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [703e710f-c544-4dab-a6b1-ffd618cd4b1a] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 20.0205763s
addons_test.go:626: (dbg) Run:  kubectl --context addons-661400 delete pod task-pv-pod-restore
addons_test.go:626: (dbg) Done: kubectl --context addons-661400 delete pod task-pv-pod-restore: (1.1475417s)
addons_test.go:630: (dbg) Run:  kubectl --context addons-661400 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-661400 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-661400 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-windows-amd64.exe -p addons-661400 addons disable csi-hostpath-driver --alsologtostderr -v=1: (8.4089522s)
addons_test.go:642: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-661400 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:642: (dbg) Done: out/minikube-windows-amd64.exe -p addons-661400 addons disable volumesnapshots --alsologtostderr -v=1: (2.3078598s)
--- PASS: TestAddons/parallel/CSI (85.44s)

                                                
                                    
TestAddons/parallel/Headlamp (25.37s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-661400 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-661400 --alsologtostderr -v=1: (2.283859s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5b77dbd7c4-z6q2z" [a6767215-1f80-450c-81da-b92a23ddcffa] Pending
helpers_test.go:344: "headlamp-5b77dbd7c4-z6q2z" [a6767215-1f80-450c-81da-b92a23ddcffa] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5b77dbd7c4-z6q2z" [a6767215-1f80-450c-81da-b92a23ddcffa] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 23.0799284s
--- PASS: TestAddons/parallel/Headlamp (25.37s)

                                                
                                    
TestAddons/parallel/CloudSpanner (7.24s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5446596998-4mzn5" [26e76efb-d0d1-468c-823b-8d12dc0c526d] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.0137236s
addons_test.go:860: (dbg) Run:  out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-661400
addons_test.go:860: (dbg) Done: out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-661400: (2.2072215s)
--- PASS: TestAddons/parallel/CloudSpanner (7.24s)

                                                
                                    
TestAddons/parallel/LocalPath (33.32s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-661400 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-661400 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Done: kubectl --context addons-661400 get pvc test-pvc -o jsonpath={.status.phase} -n default: (1.0852164s)
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661400 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [e2d689e6-5206-4b3d-8e5a-146646f006d1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [e2d689e6-5206-4b3d-8e5a-146646f006d1] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [e2d689e6-5206-4b3d-8e5a-146646f006d1] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.0144268s
addons_test.go:891: (dbg) Run:  kubectl --context addons-661400 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-661400 ssh "cat /opt/local-path-provisioner/pvc-687d6e39-42ef-4c30-99fb-24c33c871233_default_test-pvc/file1"
addons_test.go:900: (dbg) Done: out/minikube-windows-amd64.exe -p addons-661400 ssh "cat /opt/local-path-provisioner/pvc-687d6e39-42ef-4c30-99fb-24c33c871233_default_test-pvc/file1": (1.1266631s)
addons_test.go:912: (dbg) Run:  kubectl --context addons-661400 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-661400 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-661400 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-windows-amd64.exe -p addons-661400 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (1.0950822s)
--- PASS: TestAddons/parallel/LocalPath (33.32s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (7.83s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-sl929" [50196939-9e4f-479c-8c00-929c8df3b864] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.0222814s
addons_test.go:955: (dbg) Run:  out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-661400
addons_test.go:955: (dbg) Done: out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-661400: (1.8023705s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (7.83s)

                                                
                                    
TestAddons/parallel/Yakd (5.07s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-q5dts" [7d178e0e-520f-4ca3-82c7-98164e9ff4d9] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.0626938s
--- PASS: TestAddons/parallel/Yakd (5.07s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.38s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-661400 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-661400 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.38s)

                                                
                                    
TestAddons/StoppedEnableDisable (14.32s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-661400
addons_test.go:172: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-661400: (12.6771348s)
addons_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-661400
addons_test.go:180: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-661400
addons_test.go:185: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-661400
--- PASS: TestAddons/StoppedEnableDisable (14.32s)

                                                
                                    
TestCertOptions (99.72s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-410800 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-410800 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost: (1m30.6053576s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-410800 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-410800 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (1.3442858s)
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-410800 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Done: out/minikube-windows-amd64.exe ssh -p cert-options-410800 -- "sudo cat /etc/kubernetes/admin.conf": (1.4633255s)
helpers_test.go:175: Cleaning up "cert-options-410800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-410800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-410800: (6.0827921s)
--- PASS: TestCertOptions (99.72s)

                                                
                                    
TestCertExpiration (307.69s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-262100 --memory=2048 --cert-expiration=3m --driver=docker
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-262100 --memory=2048 --cert-expiration=3m --driver=docker: (1m22.9469766s)
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-262100 --memory=2048 --cert-expiration=8760h --driver=docker
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-262100 --memory=2048 --cert-expiration=8760h --driver=docker: (38.2866178s)
helpers_test.go:175: Cleaning up "cert-expiration-262100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-262100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-262100: (6.4520895s)
--- PASS: TestCertExpiration (307.69s)

                                                
                                    
TestDockerFlags (89.02s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-646100 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-646100 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker: (1m20.6375738s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-646100 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-646100 ssh "sudo systemctl show docker --property=Environment --no-pager": (1.4347818s)
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-646100 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-646100 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (1.3436739s)
helpers_test.go:175: Cleaning up "docker-flags-646100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-646100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-646100: (5.6023168s)
--- PASS: TestDockerFlags (89.02s)

                                                
                                    
TestForceSystemdFlag (84.32s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-930300 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker
docker_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-930300 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker: (1m16.743917s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-930300 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-930300 ssh "docker info --format {{.CgroupDriver}}": (1.355446s)
helpers_test.go:175: Cleaning up "force-systemd-flag-930300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-930300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-930300: (6.225012s)
--- PASS: TestForceSystemdFlag (84.32s)

                                                
                                    
TestForceSystemdEnv (92.78s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-712800 --memory=2048 --alsologtostderr -v=5 --driver=docker
docker_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-712800 --memory=2048 --alsologtostderr -v=5 --driver=docker: (1m24.8638063s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-712800 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-712800 ssh "docker info --format {{.CgroupDriver}}": (1.562399s)
helpers_test.go:175: Cleaning up "force-systemd-env-712800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-712800
E0415 18:55:37.080821   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-662500\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-712800: (6.3549674s)
--- PASS: TestForceSystemdEnv (92.78s)

                                                
                                    
TestErrorSpam/start (3.89s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-452000 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-452000 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-452000 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-452000 start --dry-run: (1.3074053s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-452000 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-452000 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-452000 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-452000 start --dry-run: (1.300868s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-452000 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-452000 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-452000 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-452000 start --dry-run: (1.275386s)
--- PASS: TestErrorSpam/start (3.89s)

                                                
                                    
TestErrorSpam/status (3.81s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-452000 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-452000 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-452000 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-452000 status: (1.2371991s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-452000 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-452000 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-452000 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-452000 status: (1.3032865s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-452000 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-452000 status
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-452000 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-452000 status: (1.2685235s)
--- PASS: TestErrorSpam/status (3.81s)

                                                
                                    
TestErrorSpam/pause (3.94s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-452000 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-452000 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-452000 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-452000 pause: (1.5892199s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-452000 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-452000 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-452000 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-452000 pause: (1.1856577s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-452000 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-452000 pause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-452000 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-452000 pause: (1.1586125s)
--- PASS: TestErrorSpam/pause (3.94s)

                                                
                                    
TestErrorSpam/unpause (4.84s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-452000 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-452000 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-452000 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-452000 unpause: (1.6960175s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-452000 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-452000 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-452000 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-452000 unpause: (1.3273754s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-452000 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-452000 unpause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-452000 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-452000 unpause: (1.8131409s)
--- PASS: TestErrorSpam/unpause (4.84s)

                                                
                                    
TestErrorSpam/stop (20.61s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-452000 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-452000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-452000 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-452000 stop: (11.787899s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-452000 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-452000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-452000 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-452000 stop: (4.2815321s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-452000 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-452000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-452000 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-452000 stop: (4.5421365s)
--- PASS: TestErrorSpam/stop (20.61s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0.03s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11748\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.03s)

                                                
                                    
TestFunctional/serial/StartWithProxy (79.03s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-662500 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker
E0415 17:53:13.953583   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-661400\client.crt: The system cannot find the path specified.
E0415 17:53:13.968551   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-661400\client.crt: The system cannot find the path specified.
E0415 17:53:13.983988   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-661400\client.crt: The system cannot find the path specified.
E0415 17:53:14.014503   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-661400\client.crt: The system cannot find the path specified.
E0415 17:53:14.061478   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-661400\client.crt: The system cannot find the path specified.
E0415 17:53:14.153998   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-661400\client.crt: The system cannot find the path specified.
E0415 17:53:14.327990   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-661400\client.crt: The system cannot find the path specified.
E0415 17:53:14.652970   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-661400\client.crt: The system cannot find the path specified.
E0415 17:53:15.296039   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-661400\client.crt: The system cannot find the path specified.
E0415 17:53:16.580329   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-661400\client.crt: The system cannot find the path specified.
E0415 17:53:19.142797   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-661400\client.crt: The system cannot find the path specified.
E0415 17:53:24.264809   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-661400\client.crt: The system cannot find the path specified.
functional_test.go:2230: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-662500 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker: (1m19.0222515s)
--- PASS: TestFunctional/serial/StartWithProxy (79.03s)

                                                
                                    
TestFunctional/serial/AuditLog (0.00s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (43.19s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-662500 --alsologtostderr -v=8
E0415 17:53:34.509257   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-661400\client.crt: The system cannot find the path specified.
E0415 17:53:54.992904   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-661400\client.crt: The system cannot find the path specified.
functional_test.go:655: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-662500 --alsologtostderr -v=8: (43.1922811s)
functional_test.go:659: soft start took 43.1939944s for "functional-662500" cluster.
--- PASS: TestFunctional/serial/SoftStart (43.19s)

                                                
                                    
TestFunctional/serial/KubeContext (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.13s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.23s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-662500 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.23s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (6.63s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-662500 cache add registry.k8s.io/pause:3.1: (2.3153641s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-662500 cache add registry.k8s.io/pause:3.3: (2.1414214s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-662500 cache add registry.k8s.io/pause:latest: (2.1704588s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (6.63s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (4.21s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-662500 C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local3646459443\001
functional_test.go:1073: (dbg) Done: docker build -t minikube-local-cache-test:functional-662500 C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local3646459443\001: (2.1039792s)
functional_test.go:1085: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 cache add minikube-local-cache-test:functional-662500
functional_test.go:1085: (dbg) Done: out/minikube-windows-amd64.exe -p functional-662500 cache add minikube-local-cache-test:functional-662500: (1.6517192s)
functional_test.go:1090: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 cache delete minikube-local-cache-test:functional-662500
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-662500
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (4.21s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.26s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.26s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.26s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.26s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (1.17s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 ssh sudo crictl images
functional_test.go:1120: (dbg) Done: out/minikube-windows-amd64.exe -p functional-662500 ssh sudo crictl images: (1.1647714s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (1.17s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (5.20s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Done: out/minikube-windows-amd64.exe -p functional-662500 ssh sudo docker rmi registry.k8s.io/pause:latest: (1.1367571s)
functional_test.go:1149: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-662500 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (1.1547475s)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 17:54:24.862929    8772 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-windows-amd64.exe -p functional-662500 cache reload: (1.757116s)
functional_test.go:1159: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Done: out/minikube-windows-amd64.exe -p functional-662500 ssh sudo crictl inspecti registry.k8s.io/pause:latest: (1.1433621s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (5.20s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.53s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.53s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.48s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 kubectl -- --context functional-662500 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.48s)

                                                
                                    
TestFunctional/serial/ExtraConfig (49.19s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-662500 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-662500 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (49.1910555s)
functional_test.go:757: restart took 49.1919651s for "functional-662500" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (49.19s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.18s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-662500 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.18s)

                                                
                                    
TestFunctional/serial/LogsCmd (2.66s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 logs
functional_test.go:1232: (dbg) Done: out/minikube-windows-amd64.exe -p functional-662500 logs: (2.6589228s)
--- PASS: TestFunctional/serial/LogsCmd (2.66s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (2.79s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 logs --file C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialLogsFileCmd2555080645\001\logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-windows-amd64.exe -p functional-662500 logs --file C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialLogsFileCmd2555080645\001\logs.txt: (2.7821022s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.79s)

                                                
                                    
TestFunctional/serial/InvalidService (6.01s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-662500 apply -f testdata\invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-662500
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-662500: exit status 115 (1.5528645s)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32150 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 17:55:34.435293    9620 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube_service_9c977cb937a5c6299cc91c983e64e702e081bf76_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-662500 delete -f testdata\invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-662500 delete -f testdata\invalidsvc.yaml: (1.0055954s)
--- PASS: TestFunctional/serial/InvalidService (6.01s)
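InvalidService applies a service whose backing pods never come up (the contents of testdata\invalidsvc.yaml are not reproduced in the log), then expects minikube service to refuse with SVC_UNREACHABLE instead of printing a usable URL; the stdout table still shows the allocated NodePort. A hedged sketch of that negative check, treating any non-zero exit (115 in this run) as the expected outcome and assuming a minikube binary on PATH:

    // invalidsvc_sketch.go - expect `minikube service` to fail for a service with no running pods.
    package main

    import (
        "errors"
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("minikube", "service", "invalid-svc",
            "-p", "functional-662500").CombinedOutput()
        if err == nil {
            log.Fatalf("expected a non-zero exit for an unreachable service, got success:\n%s", out)
        }
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) {
            fmt.Printf("minikube service failed as expected (exit %d)\n", exitErr.ExitCode())
        } else {
            log.Fatalf("could not run minikube: %v", err)
        }
    }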

                                                
                                    
TestFunctional/parallel/DryRun (3.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-662500 --dry-run --memory 250MB --alsologtostderr --driver=docker
functional_test.go:970: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-662500 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (1.4516976s)

                                                
                                                
-- stdout --
	* [functional-662500] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18634
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 17:56:45.591251   11884 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0415 17:56:45.704864   11884 out.go:291] Setting OutFile to fd 808 ...
	I0415 17:56:45.705859   11884 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 17:56:45.705859   11884 out.go:304] Setting ErrFile to fd 788...
	I0415 17:56:45.705859   11884 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 17:56:45.740685   11884 out.go:298] Setting JSON to false
	I0415 17:56:45.747693   11884 start.go:129] hostinfo: {"hostname":"minikube4","uptime":20275,"bootTime":1713183529,"procs":210,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W0415 17:56:45.747693   11884 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 17:56:45.758741   11884 out.go:177] * [functional-662500] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0415 17:56:45.761707   11884 notify.go:220] Checking for updates...
	I0415 17:56:45.763726   11884 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0415 17:56:45.766701   11884 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 17:56:45.768695   11884 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I0415 17:56:45.771699   11884 out.go:177]   - MINIKUBE_LOCATION=18634
	I0415 17:56:45.774699   11884 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 17:56:45.777690   11884 config.go:182] Loaded profile config "functional-662500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 17:56:45.779698   11884 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 17:56:46.196696   11884 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0415 17:56:46.215691   11884 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 17:56:46.719078   11884 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:68 OomKillDisable:true NGoroutines:87 SystemTime:2024-04-15 17:56:46.665413294 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion
:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:D
ocker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 17:56:46.722004   11884 out.go:177] * Using the docker driver based on existing profile
	I0415 17:56:46.726053   11884 start.go:297] selected driver: docker
	I0415 17:56:46.726053   11884 start.go:901] validating driver "docker" against &{Name:functional-662500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-662500 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 17:56:46.726053   11884 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 17:56:46.823036   11884 out.go:177] 
	W0415 17:56:46.826017   11884 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0415 17:56:46.828017   11884 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-662500 --dry-run --alsologtostderr -v=1 --driver=docker
functional_test.go:987: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-662500 --dry-run --alsologtostderr -v=1 --driver=docker: (1.7646613s)
--- PASS: TestFunctional/parallel/DryRun (3.22s)
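A --dry-run start still validates the requested resources against the existing profile, which is why asking for 250MB fails with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23) while the second, unmodified dry run passes. A sketch of the same pair of checks; the only expectation encoded is "rejected" versus "accepted", since the exact exit code is taken from this run rather than from minikube's source:

    // dryrun_sketch.go - an undersized dry run should be rejected, a plain dry run should pass.
    package main

    import (
        "errors"
        "log"
        "os/exec"
    )

    func exitCode(err error) int {
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            return ee.ExitCode()
        }
        return 0
    }

    func main() {
        // Undersized request: expect a validation failure (observed here as exit status 23).
        _, err := exec.Command("minikube", "start", "-p", "functional-662500",
            "--dry-run", "--memory", "250MB", "--driver=docker").CombinedOutput()
        if code := exitCode(err); code == 0 {
            log.Fatal("expected the 250MB dry run to be rejected")
        } else {
            log.Printf("undersized dry run rejected with exit code %d", code)
        }

        // Sane request: the dry run should succeed without touching the cluster.
        if out, err := exec.Command("minikube", "start", "-p", "functional-662500",
            "--dry-run", "--driver=docker").CombinedOutput(); err != nil {
            log.Fatalf("plain dry run failed: %v\n%s", err, out)
        }
    }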

                                                
                                    
TestFunctional/parallel/InternationalLanguage (1.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-662500 --dry-run --memory 250MB --alsologtostderr --driver=docker
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-662500 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (1.3681176s)

                                                
                                                
-- stdout --
	* [functional-662500] minikube v1.33.0-beta.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18634
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 17:56:47.087028    2552 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0415 17:56:47.207039    2552 out.go:291] Setting OutFile to fd 700 ...
	I0415 17:56:47.207039    2552 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 17:56:47.207039    2552 out.go:304] Setting ErrFile to fd 628...
	I0415 17:56:47.207039    2552 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 17:56:47.249031    2552 out.go:298] Setting JSON to false
	I0415 17:56:47.253036    2552 start.go:129] hostinfo: {"hostname":"minikube4","uptime":20277,"bootTime":1713183529,"procs":211,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W0415 17:56:47.253036    2552 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 17:56:47.259034    2552 out.go:177] * [functional-662500] minikube v1.33.0-beta.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0415 17:56:47.262037    2552 notify.go:220] Checking for updates...
	I0415 17:56:47.264027    2552 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0415 17:56:47.268050    2552 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 17:56:47.270033    2552 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I0415 17:56:47.272023    2552 out.go:177]   - MINIKUBE_LOCATION=18634
	I0415 17:56:47.276084    2552 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 17:56:47.280082    2552 config.go:182] Loaded profile config "functional-662500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 17:56:47.282051    2552 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 17:56:47.683232    2552 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0415 17:56:47.704758    2552 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 17:56:48.172282    2552 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:68 OomKillDisable:true NGoroutines:87 SystemTime:2024-04-15 17:56:48.118838518 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion
:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:D
ocker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 17:56:48.175294    2552 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0415 17:56:48.177286    2552 start.go:297] selected driver: docker
	I0415 17:56:48.178313    2552 start.go:901] validating driver "docker" against &{Name:functional-662500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-662500 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 17:56:48.178313    2552 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 17:56:48.246270    2552 out.go:177] 
	W0415 17:56:48.248268    2552 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0415 17:56:48.250269    2552 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (1.37s)
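For reference, the French output is the same failure as in DryRun: "Utilisation du pilote docker basé sur le profil existant" is "Using the docker driver based on the existing profile", and the X line reads "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250 MiB is less than the usable minimum of 1800 MB". minikube picks the message language from the process locale; the sketch below forces a French run by setting locale variables in the child environment, which is an assumption about the mechanism rather than a quote of the test code:

    // i18n_sketch.go - run minikube with a French locale to get localized output.
    // Assumption: minikube derives its message language from LC_ALL/LANG.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("minikube", "start", "-p", "functional-662500",
            "--dry-run", "--memory", "250MB", "--driver=docker")
        cmd.Env = append(os.Environ(), "LC_ALL=fr", "LANG=fr") // locale override (assumed mechanism)
        out, _ := cmd.CombinedOutput()                         // expected to fail; we only want the text
        fmt.Printf("%s\n", out)
    }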

                                                
                                    
TestFunctional/parallel/StatusCmd (5.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 status
functional_test.go:850: (dbg) Done: out/minikube-windows-amd64.exe -p functional-662500 status: (1.562078s)
functional_test.go:856: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Done: out/minikube-windows-amd64.exe -p functional-662500 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (1.7044514s)
functional_test.go:868: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 status -o json
functional_test.go:868: (dbg) Done: out/minikube-windows-amd64.exe -p functional-662500 status -o json: (2.0822187s)
--- PASS: TestFunctional/parallel/StatusCmd (5.35s)
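minikube status -f takes a Go template over the status struct; the fields exercised above are .Host, .Kubelet, .APIServer and .Kubeconfig (the "kublet" spelling is only in the literal label text of the template, not a field name). A short sketch rendering the same template plus the JSON form, assuming a minikube binary on PATH:

    // statuscmd_sketch.go - query minikube status as a Go template and as JSON.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        tmpl := "host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}"
        out, err := exec.Command("minikube", "-p", "functional-662500", "status", "-f", tmpl).CombinedOutput()
        if err != nil {
            log.Fatalf("status -f failed: %v\n%s", err, out)
        }
        fmt.Printf("templated: %s\n", out)

        out, err = exec.Command("minikube", "-p", "functional-662500", "status", "-o", "json").CombinedOutput()
        if err != nil {
            log.Fatalf("status -o json failed: %v\n%s", err, out)
        }
        fmt.Printf("json: %s\n", out)
    }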

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.74s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (61.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [c2731567-18b5-4c30-9bfe-257c96aa88e9] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.0150837s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-662500 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-662500 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-662500 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-662500 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-662500 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ee6ab619-2ddc-43b8-91a4-30f9e2776cb3] Pending
helpers_test.go:344: "sp-pod" [ee6ab619-2ddc-43b8-91a4-30f9e2776cb3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ee6ab619-2ddc-43b8-91a4-30f9e2776cb3] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 39.0214507s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-662500 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-662500 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-662500 delete -f testdata/storage-provisioner/pod.yaml: (1.666334s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-662500 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d4396c5c-3baf-4f9a-ac28-ac1cb62d8e1f] Pending
helpers_test.go:344: "sp-pod" [d4396c5c-3baf-4f9a-ac28-ac1cb62d8e1f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d4396c5c-3baf-4f9a-ac28-ac1cb62d8e1f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.0083517s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-662500 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (61.19s)
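The sequence above is the actual persistence check: write a file into the mounted claim, delete the pod, recreate it from the same manifest, and confirm the file is still there. A compressed sketch of that loop against the same manifests referenced in the log (their contents are not reproduced here, and the readiness wait is only noted as a comment):

    // pvc_sketch.go - write, recycle the pod, and confirm the file survives on the PVC.
    package main

    import (
        "log"
        "os/exec"
        "strings"
    )

    // kube is a tiny helper around `kubectl --context functional-662500`.
    func kube(args ...string) string {
        out, err := exec.Command("kubectl",
            append([]string{"--context", "functional-662500"}, args...)...).CombinedOutput()
        if err != nil {
            log.Fatalf("kubectl %v: %v\n%s", args, err, out)
        }
        return string(out)
    }

    func main() {
        kube("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")       // write into the mounted PVC
        kube("delete", "-f", "testdata/storage-provisioner/pod.yaml") // drop the pod, keep the claim
        kube("apply", "-f", "testdata/storage-provisioner/pod.yaml")  // recreate the pod on the same claim
        // The real test waits here until the new sp-pod is Running before exec'ing again.
        if !strings.Contains(kube("exec", "sp-pod", "--", "ls", "/tmp/mount"), "foo") {
            log.Fatal("file did not survive pod recreation")
        }
        log.Print("PVC data persisted across pod recreation")
    }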

                                                
                                    
TestFunctional/parallel/SSHCmd (2.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 ssh "echo hello"
functional_test.go:1721: (dbg) Done: out/minikube-windows-amd64.exe -p functional-662500 ssh "echo hello": (1.2069194s)
functional_test.go:1738: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Done: out/minikube-windows-amd64.exe -p functional-662500 ssh "cat /etc/hostname": (1.4740188s)
--- PASS: TestFunctional/parallel/SSHCmd (2.68s)

                                                
                                    
TestFunctional/parallel/CpCmd (7.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 ssh -n functional-662500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-662500 ssh -n functional-662500 "sudo cat /home/docker/cp-test.txt": (1.1388345s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 cp functional-662500:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalparallelCpCmd733256556\001\cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-662500 cp functional-662500:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalparallelCpCmd733256556\001\cp-test.txt: (1.4298302s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 ssh -n functional-662500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-662500 ssh -n functional-662500 "sudo cat /home/docker/cp-test.txt": (1.4886529s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-662500 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt: (1.2053022s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 ssh -n functional-662500 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-662500 ssh -n functional-662500 "sudo cat /tmp/does/not/exist/cp-test.txt": (1.3743498s)
--- PASS: TestFunctional/parallel/CpCmd (7.63s)
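CpCmd copies in three directions: a host file into the node, the node file back to the host, and into a node directory that does not yet exist, each verified with ssh "sudo cat". A minimal round-trip sketch with placeholder paths, assuming a minikube binary on PATH:

    // cpcmd_sketch.go - copy a file into the node and read it back over ssh.
    package main

    import (
        "log"
        "os/exec"
    )

    func mk(args ...string) []byte {
        out, err := exec.Command("minikube",
            append([]string{"-p", "functional-662500"}, args...)...).CombinedOutput()
        if err != nil {
            log.Fatalf("minikube %v: %v\n%s", args, err, out)
        }
        return out
    }

    func main() {
        mk("cp", "cp-test.txt", "/home/docker/cp-test.txt") // host -> node
        body := mk("ssh", "-n", "functional-662500", "sudo cat /home/docker/cp-test.txt")
        log.Printf("node copy contains: %s", body)
        mk("cp", "functional-662500:/home/docker/cp-test.txt", "cp-test-back.txt") // node -> host
    }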

                                                
                                    
TestFunctional/parallel/MySQL (74.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-662500 replace --force -f testdata\mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-p99fz" [55dee6fb-07a6-4709-a1a0-dde1e82fa058] Pending
helpers_test.go:344: "mysql-859648c796-p99fz" [55dee6fb-07a6-4709-a1a0-dde1e82fa058] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-p99fz" [55dee6fb-07a6-4709-a1a0-dde1e82fa058] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 1m1.0209566s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-662500 exec mysql-859648c796-p99fz -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-662500 exec mysql-859648c796-p99fz -- mysql -ppassword -e "show databases;": exit status 1 (287.7047ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-662500 exec mysql-859648c796-p99fz -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-662500 exec mysql-859648c796-p99fz -- mysql -ppassword -e "show databases;": exit status 1 (292.3377ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-662500 exec mysql-859648c796-p99fz -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-662500 exec mysql-859648c796-p99fz -- mysql -ppassword -e "show databases;": exit status 1 (302.9555ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-662500 exec mysql-859648c796-p99fz -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-662500 exec mysql-859648c796-p99fz -- mysql -ppassword -e "show databases;": exit status 1 (352.863ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-662500 exec mysql-859648c796-p99fz -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-662500 exec mysql-859648c796-p99fz -- mysql -ppassword -e "show databases;": exit status 1 (279.9825ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-662500 exec mysql-859648c796-p99fz -- mysql -ppassword -e "show databases;"
E0415 17:58:13.981133   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-661400\client.crt: The system cannot find the path specified.
E0415 17:58:41.741825   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-661400\client.crt: The system cannot find the path specified.
--- PASS: TestFunctional/parallel/MySQL (74.71s)
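The repeated exit-status-1 runs above are expected: the pod reports Running before mysqld has finished initializing, so the test keeps retrying "show databases;" until it succeeds (ERROR 2002 while the socket is not up yet, ERROR 1045 while the root password is still being applied). A sketch of the same retry loop; the pod name is the one from this run and would differ elsewhere:

    // mysql_retry_sketch.go - poll `show databases;` until mysqld inside the pod is ready.
    package main

    import (
        "log"
        "os/exec"
        "time"
    )

    func main() {
        pod := "mysql-859648c796-p99fz" // pod name from this particular run
        deadline := time.Now().Add(5 * time.Minute)
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "--context", "functional-662500",
                "exec", pod, "--", "mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
            if err == nil {
                log.Printf("mysql is up:\n%s", out)
                return
            }
            log.Printf("not ready yet (%v), retrying...", err)
            time.Sleep(10 * time.Second)
        }
        log.Fatal("mysql never became ready")
    }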

                                                
                                    
TestFunctional/parallel/FileSync (1.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/11748/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 ssh "sudo cat /etc/test/nested/copy/11748/hosts"
functional_test.go:1927: (dbg) Done: out/minikube-windows-amd64.exe -p functional-662500 ssh "sudo cat /etc/test/nested/copy/11748/hosts": (1.1623233s)
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (1.16s)

                                                
                                    
TestFunctional/parallel/CertSync (7.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/11748.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 ssh "sudo cat /etc/ssl/certs/11748.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-662500 ssh "sudo cat /etc/ssl/certs/11748.pem": (1.6688039s)
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/11748.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 ssh "sudo cat /usr/share/ca-certificates/11748.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-662500 ssh "sudo cat /usr/share/ca-certificates/11748.pem": (1.1593481s)
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-662500 ssh "sudo cat /etc/ssl/certs/51391683.0": (1.514642s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/117482.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 ssh "sudo cat /etc/ssl/certs/117482.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-662500 ssh "sudo cat /etc/ssl/certs/117482.pem": (1.1116749s)
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/117482.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 ssh "sudo cat /usr/share/ca-certificates/117482.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-662500 ssh "sudo cat /usr/share/ca-certificates/117482.pem": (1.177677s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-662500 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (1.1954349s)
--- PASS: TestFunctional/parallel/CertSync (7.83s)
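CertSync looks for the synced certificates both under their copied names (keyed by the test PID, 11748 here) and under the hash-style names that /etc/ssl/certs uses; 51391683.0 and 3ec20f2e.0 are presumably the OpenSSL subject hashes of the two test certificates. A hedged sketch that derives the hash for a local certificate and reads the matching file on the node (the certificate path is a placeholder):

    // certsync_sketch.go - check a cert both by its copied name and by its OpenSSL subject hash.
    package main

    import (
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        certPath := "test-cert.pem" // placeholder for the certificate that was synced

        // `openssl x509 -hash` prints the subject hash used for the <hash>.0 links in /etc/ssl/certs.
        hashOut, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            log.Fatalf("openssl: %v", err)
        }
        hash := strings.TrimSpace(string(hashOut))

        // Read the hash-named copy inside the minikube node.
        out, err := exec.Command("minikube", "-p", "functional-662500", "ssh",
            "sudo cat /etc/ssl/certs/"+hash+".0").CombinedOutput()
        if err != nil {
            log.Fatalf("cert %s.0 not present on the node: %v\n%s", hash, err, out)
        }
        log.Printf("found synced certificate %s.0 (%d bytes)", hash, len(out))
    }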

                                                
                                    
TestFunctional/parallel/NodeLabels (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-662500 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.19s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (1.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-662500 ssh "sudo systemctl is-active crio": exit status 1 (1.2221748s)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 17:55:48.507064   15276 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (1.22s)
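The exit status 1 here is the pass condition: with the docker runtime selected, systemctl is-active crio prints "inactive" and exits with status 3, which the ssh wrapper surfaces as "Process exited with status 3". A small sketch of that inverted assertion:

    // runtime_disabled_sketch.go - assert that the non-selected runtime (crio) is NOT active.
    package main

    import (
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("minikube", "-p", "functional-662500", "ssh",
            "sudo systemctl is-active crio").CombinedOutput()
        state := strings.TrimSpace(string(out))
        if err == nil && state == "active" {
            log.Fatal("crio should not be active when the docker runtime is selected")
        }
        // systemctl exits 3 for an inactive unit, so a non-nil err with "inactive" output is the pass case.
        log.Printf("crio state: %q (err: %v) - as expected for a docker-runtime cluster", state, err)
    }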

                                                
                                    
TestFunctional/parallel/License (3.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2284: (dbg) Done: out/minikube-windows-amd64.exe license: (3.1012808s)
--- PASS: TestFunctional/parallel/License (3.12s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (20.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-662500 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-662500 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-wzv7d" [a268a206-8e2b-433a-b975-ba586808686a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-wzv7d" [a268a206-8e2b-433a-b975-ba586808686a] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 20.0173456s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (20.43s)
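DeployApp creates a deployment from the echoserver image, exposes it as a NodePort service on port 8080, and waits for the app=hello-node pod to become healthy; later ServiceCmd subtests reuse that service. A condensed sketch that swaps the test's own polling helper for kubectl wait:

    // deployapp_sketch.go - deploy and expose the hello-node echoserver, then wait for readiness.
    package main

    import (
        "log"
        "os/exec"
    )

    func kube(args ...string) {
        out, err := exec.Command("kubectl",
            append([]string{"--context", "functional-662500"}, args...)...).CombinedOutput()
        if err != nil {
            log.Fatalf("kubectl %v: %v\n%s", args, err, out)
        }
    }

    func main() {
        kube("create", "deployment", "hello-node", "--image=registry.k8s.io/echoserver:1.8")
        kube("expose", "deployment", "hello-node", "--type=NodePort", "--port=8080")
        // The test polls pod health itself; `kubectl wait` is an equivalent shortcut.
        kube("wait", "--for=condition=ready", "pod", "-l", "app=hello-node", "--timeout=10m")
        log.Print("hello-node is ready")
    }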

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (1.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-662500 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-662500 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-662500 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-662500 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1840: OpenProcess: The parameter is incorrect.
helpers_test.go:508: unable to kill pid 13308: OpenProcess: The parameter is incorrect.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (1.85s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-662500 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (21.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-662500 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [26f8bda0-0b1d-4ef8-85e2-821dee3b7650] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [26f8bda0-0b1d-4ef8-85e2-821dee3b7650] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 21.0767001s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (21.68s)

                                                
                                    
TestFunctional/parallel/Version/short (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 version --short
--- PASS: TestFunctional/parallel/Version/short (0.28s)

                                                
                                    
TestFunctional/parallel/Version/components (2.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-windows-amd64.exe -p functional-662500 version -o=json --components: (2.4978287s)
--- PASS: TestFunctional/parallel/Version/components (2.50s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-662500 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.29.3
registry.k8s.io/kube-proxy:v1.29.3
registry.k8s.io/kube-controller-manager:v1.29.3
registry.k8s.io/kube-apiserver:v1.29.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-662500
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-662500
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-662500 image ls --format short --alsologtostderr:
W0415 17:56:52.015526    1564 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0415 17:56:52.111436    1564 out.go:291] Setting OutFile to fd 912 ...
I0415 17:56:52.112160    1564 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 17:56:52.112160    1564 out.go:304] Setting ErrFile to fd 820...
I0415 17:56:52.112251    1564 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 17:56:52.129048    1564 config.go:182] Loaded profile config "functional-662500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 17:56:52.129048    1564 config.go:182] Loaded profile config "functional-662500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 17:56:52.153037    1564 cli_runner.go:164] Run: docker container inspect functional-662500 --format={{.State.Status}}
I0415 17:56:52.343064    1564 ssh_runner.go:195] Run: systemctl --version
I0415 17:56:52.352056    1564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-662500
I0415 17:56:52.516422    1564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51311 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-662500\id_rsa Username:docker}
I0415 17:56:52.644499    1564 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.85s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-662500 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/kube-apiserver              | v1.29.3           | 39f995c9f1996 | 127MB  |
| docker.io/library/nginx                     | alpine            | e289a478ace02 | 42.6MB |
| docker.io/library/nginx                     | latest            | c613f16b66424 | 187MB  |
| registry.k8s.io/etcd                        | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| gcr.io/google-containers/addon-resizer      | functional-662500 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/kube-scheduler              | v1.29.3           | 8c390d98f50c0 | 59.6MB |
| registry.k8s.io/kube-proxy                  | v1.29.3           | a1d263b5dc5b0 | 82.4MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-662500 | 3c525046afdf4 | 30B    |
| registry.k8s.io/kube-controller-manager     | v1.29.3           | 6052a25da3f97 | 122MB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-662500 image ls --format table --alsologtostderr:
W0415 17:56:52.854850   13752 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0415 17:56:52.954317   13752 out.go:291] Setting OutFile to fd 956 ...
I0415 17:56:52.955306   13752 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 17:56:52.955306   13752 out.go:304] Setting ErrFile to fd 780...
I0415 17:56:52.955306   13752 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 17:56:52.970302   13752 config.go:182] Loaded profile config "functional-662500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 17:56:52.971365   13752 config.go:182] Loaded profile config "functional-662500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 17:56:52.997316   13752 cli_runner.go:164] Run: docker container inspect functional-662500 --format={{.State.Status}}
I0415 17:56:53.203327   13752 ssh_runner.go:195] Run: systemctl --version
I0415 17:56:53.212332   13752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-662500
I0415 17:56:53.383332   13752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51311 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-662500\id_rsa Username:docker}
I0415 17:56:53.548628   13752 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.90s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-662500 image ls --format json --alsologtostderr:
[{"id":"39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.29.3"],"size":"127000000"},{"id":"a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.29.3"],"size":"82400000"},{"id":"e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"42600000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3","
repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.29.3"],"size":"122000000"},{"id":"8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.29.3"],"size":"59600000"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"149000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-662500"],"size":"32900000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"c613f16b664244b150d1c3644cbc387ec1fe8376377f9419992280eb4a82ff3b","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/cor
edns/coredns:v1.11.1"],"size":"59800000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"3c525046afdf4cd091bca782f4d20fb279c0976c6eae9d73b7ea3c989b1b9f94","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-662500"],"size":"30"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-662500 image ls --format json --alsologtostderr:
W0415 17:56:52.219040    2956 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0415 17:56:52.306049    2956 out.go:291] Setting OutFile to fd 936 ...
I0415 17:56:52.317048    2956 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 17:56:52.317048    2956 out.go:304] Setting ErrFile to fd 836...
I0415 17:56:52.317048    2956 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 17:56:52.343064    2956 config.go:182] Loaded profile config "functional-662500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 17:56:52.343064    2956 config.go:182] Loaded profile config "functional-662500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 17:56:52.363069    2956 cli_runner.go:164] Run: docker container inspect functional-662500 --format={{.State.Status}}
I0415 17:56:52.548498    2956 ssh_runner.go:195] Run: systemctl --version
I0415 17:56:52.557498    2956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-662500
I0415 17:56:52.735512    2956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51311 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-662500\id_rsa Username:docker}
I0415 17:56:52.986306    2956 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.96s)
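
The JSON listing above is a single flat array whose entries carry string-valued id, repoDigests, repoTags and size fields. A minimal PowerShell sketch for consuming it, reusing the profile name and binary path from this log together with the standard ConvertFrom-Json cmdlet:

    # Parse the JSON image listing and print one row per tag (sketch, not part of the test).
    $images = (out/minikube-windows-amd64.exe -p functional-662500 image ls --format json 2>$null) -join '' |
        ConvertFrom-Json
    foreach ($img in $images) {
        foreach ($tag in $img.repoTags) {
            # size is a string of bytes, so cast it before converting to (decimal) MB
            '{0}  {1:N1} MB' -f $tag, ([double]$img.size / 1e6)
        }
    }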

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-662500 image ls --format yaml --alsologtostderr:
- id: 3c525046afdf4cd091bca782f4d20fb279c0976c6eae9d73b7ea3c989b1b9f94
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-662500
size: "30"
- id: 8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.29.3
size: "59600000"
- id: a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.29.3
size: "82400000"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "149000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-662500
size: "32900000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: c613f16b664244b150d1c3644cbc387ec1fe8376377f9419992280eb4a82ff3b
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: 39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.29.3
size: "127000000"
- id: 6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.29.3
size: "122000000"
- id: e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "42600000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-662500 image ls --format yaml --alsologtostderr:
W0415 17:56:53.755410    7884 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0415 17:56:53.838987    7884 out.go:291] Setting OutFile to fd 956 ...
I0415 17:56:53.838987    7884 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 17:56:53.838987    7884 out.go:304] Setting ErrFile to fd 780...
I0415 17:56:53.838987    7884 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 17:56:53.858994    7884 config.go:182] Loaded profile config "functional-662500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 17:56:53.858994    7884 config.go:182] Loaded profile config "functional-662500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 17:56:53.882640    7884 cli_runner.go:164] Run: docker container inspect functional-662500 --format={{.State.Status}}
I0415 17:56:54.083181    7884 ssh_runner.go:195] Run: systemctl --version
I0415 17:56:54.099178    7884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-662500
I0415 17:56:54.266430    7884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51311 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-662500\id_rsa Username:docker}
I0415 17:56:54.402077    7884 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.89s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (9.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-662500 ssh pgrep buildkitd: exit status 1 (1.1690563s)

                                                
                                                
** stderr ** 
	W0415 17:56:53.182326    7476 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 image build -t localhost/my-image:functional-662500 testdata\build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe -p functional-662500 image build -t localhost/my-image:functional-662500 testdata\build --alsologtostderr: (7.3277752s)
functional_test.go:322: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-662500 image build -t localhost/my-image:functional-662500 testdata\build --alsologtostderr:
W0415 17:56:54.347686    7644 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0415 17:56:54.447520    7644 out.go:291] Setting OutFile to fd 956 ...
I0415 17:56:54.466543    7644 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 17:56:54.466610    7644 out.go:304] Setting ErrFile to fd 780...
I0415 17:56:54.466677    7644 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 17:56:54.483338    7644 config.go:182] Loaded profile config "functional-662500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 17:56:54.502571    7644 config.go:182] Loaded profile config "functional-662500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 17:56:54.526129    7644 cli_runner.go:164] Run: docker container inspect functional-662500 --format={{.State.Status}}
I0415 17:56:54.712520    7644 ssh_runner.go:195] Run: systemctl --version
I0415 17:56:54.722523    7644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-662500
I0415 17:56:54.894037    7644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51311 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-662500\id_rsa Username:docker}
I0415 17:56:55.006051    7644 build_images.go:161] Building image from path: C:\Users\jenkins.minikube4\AppData\Local\Temp\build.1082654347.tar
I0415 17:56:55.021355    7644 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0415 17:56:55.059432    7644 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1082654347.tar
I0415 17:56:55.081124    7644 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1082654347.tar: stat -c "%s %y" /var/lib/minikube/build/build.1082654347.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1082654347.tar': No such file or directory
I0415 17:56:55.081890    7644 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\AppData\Local\Temp\build.1082654347.tar --> /var/lib/minikube/build/build.1082654347.tar (3072 bytes)
I0415 17:56:55.140277    7644 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1082654347
I0415 17:56:55.211514    7644 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1082654347 -xf /var/lib/minikube/build/build.1082654347.tar
I0415 17:56:55.233578    7644 docker.go:360] Building image: /var/lib/minikube/build/build.1082654347
I0415 17:56:55.242577    7644 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-662500 /var/lib/minikube/build/build.1082654347
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile:
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.6s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.1s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.8s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 3.3s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.2s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:ee2c7e9476230b1b9f336987d402d593521e5018d7b47eee4e2f903a406a75d0 done
#8 naming to localhost/my-image:functional-662500 0.0s done
#8 DONE 0.2s
I0415 17:57:01.466367    7644 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-662500 /var/lib/minikube/build/build.1082654347: (6.2234968s)
I0415 17:57:01.481673    7644 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1082654347
I0415 17:57:01.510791    7644 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1082654347.tar
I0415 17:57:01.529733    7644 build_images.go:217] Built localhost/my-image:functional-662500 from C:\Users\jenkins.minikube4\AppData\Local\Temp\build.1082654347.tar
I0415 17:57:01.529733    7644 build_images.go:133] succeeded building to: functional-662500
I0415 17:57:01.529733    7644 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (9.31s)
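
The --alsologtostderr trace above shows what image build does on the Docker runtime: the local context (testdata\build) is tarred on the host, copied over SSH into /var/lib/minikube/build on the node, and built there with docker build. A hedged sketch of the same round trip by hand, using the commands from this log (the final Select-String filter is just a convenience):

    # Build inside the functional-662500 node, then confirm the image is visible to its runtime.
    out/minikube-windows-amd64.exe -p functional-662500 image build `
        -t localhost/my-image:functional-662500 testdata\build --alsologtostderr
    out/minikube-windows-amd64.exe -p functional-662500 image ls | Select-String my-image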

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (4.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (4.1220661s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-662500
--- PASS: TestFunctional/parallel/ImageCommands/Setup (4.36s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (13.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 image load --daemon gcr.io/google-containers/addon-resizer:functional-662500 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-windows-amd64.exe -p functional-662500 image load --daemon gcr.io/google-containers/addon-resizer:functional-662500 --alsologtostderr: (12.4666233s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-662500 image ls: (1.2015641s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (13.67s)
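
ImageLoadDaemon exercises the pull, tag and load path: the image is pulled into the host Docker daemon (see the Setup block below at functional_test.go:341), re-tagged with the profile name, and image load --daemon then streams it into the cluster node's runtime so that image ls can see it. The equivalent manual sequence, assembled from the commands in these blocks:

    # Make a host-side image visible inside the functional-662500 node (sketch).
    docker pull gcr.io/google-containers/addon-resizer:1.8.8
    docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-662500
    out/minikube-windows-amd64.exe -p functional-662500 image load --daemon `
        gcr.io/google-containers/addon-resizer:functional-662500 --alsologtostderr
    out/minikube-windows-amd64.exe -p functional-662500 image ls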

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (1.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 service list
E0415 17:55:57.889156   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-661400\client.crt: The system cannot find the path specified.
functional_test.go:1455: (dbg) Done: out/minikube-windows-amd64.exe -p functional-662500 service list: (1.5879466s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.59s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (1.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 service list -o json
functional_test.go:1485: (dbg) Done: out/minikube-windows-amd64.exe -p functional-662500 service list -o json: (1.4298556s)
functional_test.go:1490: Took "1.42995s" to run "out/minikube-windows-amd64.exe -p functional-662500 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.43s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (15.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-662500 service --namespace=default --https --url hello-node: exit status 1 (15.0181028s)

                                                
                                                
-- stdout --
	https://127.0.0.1:51571

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 17:56:00.526770   12468 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

                                                
                                                
** /stderr **
functional_test.go:1518: found endpoint: https://127.0.0.1:51571
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.02s)
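
The exit status 1 here appears to be benign: with the Docker driver on Windows, minikube service --url prints the forwarded endpoint and then keeps a tunnel alive only while the terminal stays open (hence the "! Because you are using a Docker driver on windows..." hint), and the test simply reads the URL from stdout before tearing the process down after about 15 seconds. For interactive use the same command can be left running in its own window:

    # Keep this window open; the tunnel lives only as long as the terminal does (sketch).
    out/minikube-windows-amd64.exe -p functional-662500 service --namespace=default --https --url hello-node
    # stdout prints the forwarded endpoint (https://127.0.0.1:51571 in this run); open it from
    # another window or a browser while the command above is still running.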

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-662500 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-662500 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 7616: OpenProcess: The parameter is incorrect.
helpers_test.go:508: unable to kill pid 6124: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (5.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 image load --daemon gcr.io/google-containers/addon-resizer:functional-662500 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-windows-amd64.exe -p functional-662500 image load --daemon gcr.io/google-containers/addon-resizer:functional-662500 --alsologtostderr: (5.1055353s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (5.91s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (15.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (3.7015066s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-662500
functional_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 image load --daemon gcr.io/google-containers/addon-resizer:functional-662500 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-windows-amd64.exe -p functional-662500 image load --daemon gcr.io/google-containers/addon-resizer:functional-662500 --alsologtostderr: (10.220667s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (15.04s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (15.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-662500 service hello-node --url --format={{.IP}}: exit status 1 (15.0297352s)

                                                
                                                
-- stdout --
	127.0.0.1

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 17:56:15.498652   10084 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.03s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv/powershell (9.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:495: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-662500 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-662500"
functional_test.go:495: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-662500 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-662500": (5.2986681s)
functional_test.go:518: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-662500 docker-env | Invoke-Expression ; docker images"
functional_test.go:518: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-662500 docker-env | Invoke-Expression ; docker images": (3.8297616s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (9.14s)
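
The DockerEnv check pipes docker-env through Invoke-Expression so the docker CLI in that PowerShell session talks to the daemon inside the functional-662500 container instead of the host daemon; the follow-up docker images therefore lists the cluster-side images. The same pattern for interactive use, copied from the test invocation (the --unset remark is an assumption, not shown in this log):

    # Point this session's docker CLI at the minikube node, then list its images (sketch).
    out/minikube-windows-amd64.exe -p functional-662500 docker-env | Invoke-Expression
    docker images
    # The change is per-session: closing the window restores the host daemon, and minikube's
    # docker-env --unset flag should undo it in place (assumption, not exercised here).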

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (3.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 image save gcr.io/google-containers/addon-resizer:functional-662500 C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-windows-amd64.exe -p functional-662500 image save gcr.io/google-containers/addon-resizer:functional-662500 C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar --alsologtostderr: (3.7956639s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (3.80s)
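
ImageSaveToFile and ImageLoadFromFile (further down) form a round trip: image save exports the tagged image from the cluster to a tarball in the Jenkins workspace, and image load re-imports that tarball. A sketch of the pair, with both paths taken verbatim from the log:

    # Export an image from the cluster to a tarball, then re-import it (sketch).
    out/minikube-windows-amd64.exe -p functional-662500 image save `
        gcr.io/google-containers/addon-resizer:functional-662500 `
        C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar --alsologtostderr
    out/minikube-windows-amd64.exe -p functional-662500 image load `
        C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar --alsologtostderr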

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (15.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-662500 service hello-node --url: exit status 1 (15.0188561s)

                                                
                                                
-- stdout --
	http://127.0.0.1:51621

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 17:56:30.525257   16036 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

                                                
                                                
** /stderr **
functional_test.go:1561: found endpoint for hello-node: http://127.0.0.1:51621
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.02s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (1.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 image rm gcr.io/google-containers/addon-resizer:functional-662500 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.77s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.66s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.68s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.65s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (6.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 image load C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-windows-amd64.exe -p functional-662500 image load C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar --alsologtostderr: (4.9409485s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-662500 image ls: (1.0875297s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (6.03s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (6.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-662500
functional_test.go:423: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-662500 image save --daemon gcr.io/google-containers/addon-resizer:functional-662500 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-windows-amd64.exe -p functional-662500 image save --daemon gcr.io/google-containers/addon-resizer:functional-662500 --alsologtostderr: (6.1531162s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-662500
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (6.72s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (2.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1271: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.9810286s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (2.54s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (2.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1306: (dbg) Done: out/minikube-windows-amd64.exe profile list: (2.3444346s)
functional_test.go:1311: Took "2.3444346s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1325: Took "299.9874ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (2.64s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (1.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1357: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (1.6920465s)
functional_test.go:1362: Took "1.6928617s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1375: Took "247.5126ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (1.94s)
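
The timing gap in these ProfileCmd checks is the notable part: the full profile list -o json takes about 1.7s while the --light variant returns in roughly 0.25s, presumably because it skips the per-profile status probe. A sketch for comparing the two modes with the standard Measure-Command cmdlet (timings will vary by host):

    # Compare full vs. light profile listing (sketch).
    Measure-Command { out/minikube-windows-amd64.exe profile list -o json }         | Select-Object TotalSeconds
    Measure-Command { out/minikube-windows-amd64.exe profile list -o json --light } | Select-Object TotalSeconds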

                                                
                                    
x
+
TestFunctional/delete_addon-resizer_images (0.43s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-662500
--- PASS: TestFunctional/delete_addon-resizer_images (0.43s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.17s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-662500
--- PASS: TestFunctional/delete_my-image_image (0.17s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.17s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-662500
--- PASS: TestFunctional/delete_minikube_cached_images (0.17s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (229.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-589200 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker
E0415 18:03:13.985897   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-661400\client.crt: The system cannot find the path specified.
E0415 18:05:36.942720   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-662500\client.crt: The system cannot find the path specified.
E0415 18:05:36.957675   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-662500\client.crt: The system cannot find the path specified.
E0415 18:05:36.973065   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-662500\client.crt: The system cannot find the path specified.
E0415 18:05:37.004283   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-662500\client.crt: The system cannot find the path specified.
E0415 18:05:37.051497   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-662500\client.crt: The system cannot find the path specified.
E0415 18:05:37.144781   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-662500\client.crt: The system cannot find the path specified.
E0415 18:05:37.316891   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-662500\client.crt: The system cannot find the path specified.
E0415 18:05:37.643709   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-662500\client.crt: The system cannot find the path specified.
E0415 18:05:38.299248   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-662500\client.crt: The system cannot find the path specified.
E0415 18:05:39.593440   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-662500\client.crt: The system cannot find the path specified.
ha_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-589200 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker: (3m45.8765139s)
ha_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 status -v=7 --alsologtostderr
E0415 18:05:42.154743   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-662500\client.crt: The system cannot find the path specified.
ha_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 status -v=7 --alsologtostderr: (3.4144099s)
--- PASS: TestMultiControlPlane/serial/StartCluster (229.29s)
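
StartCluster brings up ha-589200 with the --ha flag, which in this run appears to create three control-plane nodes (ha-589200, -m02 and -m03, with AddWorkerNode later adding -m04 as a worker). The repeated E-level cert_rotation lines come from the shared test process (pid 11748) and reference client certificates of earlier profiles (addons-661400, functional-662500) that are no longer on disk; they do not affect this test's result. A sketch of the same start/status sequence, flags copied from the log:

    # Start a multi-control-plane cluster and check its nodes (sketch).
    out/minikube-windows-amd64.exe start -p ha-589200 --wait=true --memory=2200 --ha -v=7 `
        --alsologtostderr --driver=docker
    out/minikube-windows-amd64.exe -p ha-589200 status -v=7 --alsologtostderr
    out/minikube-windows-amd64.exe node add -p ha-589200 -v=7 --alsologtostderr    # adds a worker node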

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (14.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-589200 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-589200 -- rollout status deployment/busybox
E0415 18:05:47.276231   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-662500\client.crt: The system cannot find the path specified.
ha_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-589200 -- rollout status deployment/busybox: (3.9471596s)
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-589200 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-589200 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-589200 -- exec busybox-7fdf7869d9-nsdk8 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-589200 -- exec busybox-7fdf7869d9-nsdk8 -- nslookup kubernetes.io: (2.0813324s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-589200 -- exec busybox-7fdf7869d9-thjwf -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-589200 -- exec busybox-7fdf7869d9-thjwf -- nslookup kubernetes.io: (1.5985547s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-589200 -- exec busybox-7fdf7869d9-zgnpp -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-589200 -- exec busybox-7fdf7869d9-zgnpp -- nslookup kubernetes.io: (1.6292929s)
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-589200 -- exec busybox-7fdf7869d9-nsdk8 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-589200 -- exec busybox-7fdf7869d9-thjwf -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-589200 -- exec busybox-7fdf7869d9-zgnpp -- nslookup kubernetes.default
E0415 18:05:57.529914   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-662500\client.crt: The system cannot find the path specified.
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-589200 -- exec busybox-7fdf7869d9-nsdk8 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-589200 -- exec busybox-7fdf7869d9-thjwf -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-589200 -- exec busybox-7fdf7869d9-zgnpp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (14.09s)
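
DeployApp applies the ha-pod-dns-test manifest, waits for the busybox rollout, and then runs the same three nslookup probes (kubernetes.io, kubernetes.default, kubernetes.default.svc.cluster.local) in every replica to confirm that both external and in-cluster DNS resolve from each node. A sketch that loops the probe over the pods, reusing the kubectl forms that appear in this log:

    # Repeat the DNS probes from every pod in the default namespace (the busybox replicas here).
    $pods = kubectl --context ha-589200 get pods -o jsonpath='{.items[*].metadata.name}'
    foreach ($pod in ($pods -split ' ')) {
        foreach ($name in 'kubernetes.io', 'kubernetes.default', 'kubernetes.default.svc.cluster.local') {
            kubectl --context ha-589200 exec $pod -- nslookup $name
        }
    }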

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (3.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-589200 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-589200 -- exec busybox-7fdf7869d9-nsdk8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-589200 -- exec busybox-7fdf7869d9-nsdk8 -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-589200 -- exec busybox-7fdf7869d9-thjwf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-589200 -- exec busybox-7fdf7869d9-thjwf -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-589200 -- exec busybox-7fdf7869d9-zgnpp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-589200 -- exec busybox-7fdf7869d9-zgnpp -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (3.61s)
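
PingHostFromPods checks that pods can reach the host: inside each busybox replica it resolves host.minikube.internal, slices the resolved address out of the nslookup output with awk 'NR==5' | cut -d' ' -f3, and pings that address (192.168.65.254 in this run). The same probe for a single replica, hedged as a sketch with the pod name copied from this run:

    # Resolve host.minikube.internal from one replica, then ping the resolved address (sketch).
    kubectl --context ha-589200 exec busybox-7fdf7869d9-nsdk8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    kubectl --context ha-589200 exec busybox-7fdf7869d9-nsdk8 -- sh -c "ping -c 1 192.168.65.254"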

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (58.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe node add -p ha-589200 -v=7 --alsologtostderr
E0415 18:06:18.015079   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-662500\client.crt: The system cannot find the path specified.
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe node add -p ha-589200 -v=7 --alsologtostderr: (54.1054518s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 status -v=7 --alsologtostderr
E0415 18:06:58.984546   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-662500\client.crt: The system cannot find the path specified.
ha_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 status -v=7 --alsologtostderr: (4.3529411s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (58.46s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-589200 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.17s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (3.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (3.3464858s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (3.35s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (70.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 status --output json -v=7 --alsologtostderr: (4.193761s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 cp testdata\cp-test.txt ha-589200:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 cp testdata\cp-test.txt ha-589200:/home/docker/cp-test.txt: (1.1304445s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200 "sudo cat /home/docker/cp-test.txt": (1.2667466s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 cp ha-589200:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3411908143\001\cp-test_ha-589200.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 cp ha-589200:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3411908143\001\cp-test_ha-589200.txt: (1.1233916s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200 "sudo cat /home/docker/cp-test.txt": (1.1336384s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 cp ha-589200:/home/docker/cp-test.txt ha-589200-m02:/home/docker/cp-test_ha-589200_ha-589200-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 cp ha-589200:/home/docker/cp-test.txt ha-589200-m02:/home/docker/cp-test_ha-589200_ha-589200-m02.txt: (1.6493419s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200 "sudo cat /home/docker/cp-test.txt": (1.1612774s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200-m02 "sudo cat /home/docker/cp-test_ha-589200_ha-589200-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200-m02 "sudo cat /home/docker/cp-test_ha-589200_ha-589200-m02.txt": (1.1152645s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 cp ha-589200:/home/docker/cp-test.txt ha-589200-m03:/home/docker/cp-test_ha-589200_ha-589200-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 cp ha-589200:/home/docker/cp-test.txt ha-589200-m03:/home/docker/cp-test_ha-589200_ha-589200-m03.txt: (1.6407334s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200 "sudo cat /home/docker/cp-test.txt": (1.1059607s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200-m03 "sudo cat /home/docker/cp-test_ha-589200_ha-589200-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200-m03 "sudo cat /home/docker/cp-test_ha-589200_ha-589200-m03.txt": (1.1521502s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 cp ha-589200:/home/docker/cp-test.txt ha-589200-m04:/home/docker/cp-test_ha-589200_ha-589200-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 cp ha-589200:/home/docker/cp-test.txt ha-589200-m04:/home/docker/cp-test_ha-589200_ha-589200-m04.txt: (1.7096288s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200 "sudo cat /home/docker/cp-test.txt": (1.3071076s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200-m04 "sudo cat /home/docker/cp-test_ha-589200_ha-589200-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200-m04 "sudo cat /home/docker/cp-test_ha-589200_ha-589200-m04.txt": (1.1270191s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 cp testdata\cp-test.txt ha-589200-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 cp testdata\cp-test.txt ha-589200-m02:/home/docker/cp-test.txt: (1.2026241s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200-m02 "sudo cat /home/docker/cp-test.txt": (1.1724484s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 cp ha-589200-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3411908143\001\cp-test_ha-589200-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 cp ha-589200-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3411908143\001\cp-test_ha-589200-m02.txt: (1.1222664s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200-m02 "sudo cat /home/docker/cp-test.txt": (1.1520327s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 cp ha-589200-m02:/home/docker/cp-test.txt ha-589200:/home/docker/cp-test_ha-589200-m02_ha-589200.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 cp ha-589200-m02:/home/docker/cp-test.txt ha-589200:/home/docker/cp-test_ha-589200-m02_ha-589200.txt: (1.7013677s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200-m02 "sudo cat /home/docker/cp-test.txt": (1.1530478s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200 "sudo cat /home/docker/cp-test_ha-589200-m02_ha-589200.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200 "sudo cat /home/docker/cp-test_ha-589200-m02_ha-589200.txt": (1.1748584s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 cp ha-589200-m02:/home/docker/cp-test.txt ha-589200-m03:/home/docker/cp-test_ha-589200-m02_ha-589200-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 cp ha-589200-m02:/home/docker/cp-test.txt ha-589200-m03:/home/docker/cp-test_ha-589200-m02_ha-589200-m03.txt: (1.6551627s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200-m02 "sudo cat /home/docker/cp-test.txt": (1.1855663s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200-m03 "sudo cat /home/docker/cp-test_ha-589200-m02_ha-589200-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200-m03 "sudo cat /home/docker/cp-test_ha-589200-m02_ha-589200-m03.txt": (1.1237056s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 cp ha-589200-m02:/home/docker/cp-test.txt ha-589200-m04:/home/docker/cp-test_ha-589200-m02_ha-589200-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 cp ha-589200-m02:/home/docker/cp-test.txt ha-589200-m04:/home/docker/cp-test_ha-589200-m02_ha-589200-m04.txt: (1.6927551s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200-m02 "sudo cat /home/docker/cp-test.txt": (1.1634492s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200-m04 "sudo cat /home/docker/cp-test_ha-589200-m02_ha-589200-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200-m04 "sudo cat /home/docker/cp-test_ha-589200-m02_ha-589200-m04.txt": (1.1009156s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 cp testdata\cp-test.txt ha-589200-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 cp testdata\cp-test.txt ha-589200-m03:/home/docker/cp-test.txt: (1.176942s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200-m03 "sudo cat /home/docker/cp-test.txt": (1.1173208s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 cp ha-589200-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3411908143\001\cp-test_ha-589200-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 cp ha-589200-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3411908143\001\cp-test_ha-589200-m03.txt: (1.1230652s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200-m03 "sudo cat /home/docker/cp-test.txt": (1.1286521s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 cp ha-589200-m03:/home/docker/cp-test.txt ha-589200:/home/docker/cp-test_ha-589200-m03_ha-589200.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 cp ha-589200-m03:/home/docker/cp-test.txt ha-589200:/home/docker/cp-test_ha-589200-m03_ha-589200.txt: (1.6258488s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200-m03 "sudo cat /home/docker/cp-test.txt": (1.1173284s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200 "sudo cat /home/docker/cp-test_ha-589200-m03_ha-589200.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200 "sudo cat /home/docker/cp-test_ha-589200-m03_ha-589200.txt": (1.1228816s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 cp ha-589200-m03:/home/docker/cp-test.txt ha-589200-m02:/home/docker/cp-test_ha-589200-m03_ha-589200-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 cp ha-589200-m03:/home/docker/cp-test.txt ha-589200-m02:/home/docker/cp-test_ha-589200-m03_ha-589200-m02.txt: (1.6518946s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200-m03 "sudo cat /home/docker/cp-test.txt": (1.1395078s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200-m02 "sudo cat /home/docker/cp-test_ha-589200-m03_ha-589200-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200-m02 "sudo cat /home/docker/cp-test_ha-589200-m03_ha-589200-m02.txt": (1.1277333s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 cp ha-589200-m03:/home/docker/cp-test.txt ha-589200-m04:/home/docker/cp-test_ha-589200-m03_ha-589200-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 cp ha-589200-m03:/home/docker/cp-test.txt ha-589200-m04:/home/docker/cp-test_ha-589200-m03_ha-589200-m04.txt: (1.6650312s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200-m03 "sudo cat /home/docker/cp-test.txt": (1.1403432s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200-m04 "sudo cat /home/docker/cp-test_ha-589200-m03_ha-589200-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200-m04 "sudo cat /home/docker/cp-test_ha-589200-m03_ha-589200-m04.txt": (1.149027s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 cp testdata\cp-test.txt ha-589200-m04:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 cp testdata\cp-test.txt ha-589200-m04:/home/docker/cp-test.txt: (1.1741758s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200-m04 "sudo cat /home/docker/cp-test.txt": (1.1267526s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 cp ha-589200-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3411908143\001\cp-test_ha-589200-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 cp ha-589200-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3411908143\001\cp-test_ha-589200-m04.txt: (1.1189572s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200-m04 "sudo cat /home/docker/cp-test.txt": (1.1830416s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 cp ha-589200-m04:/home/docker/cp-test.txt ha-589200:/home/docker/cp-test_ha-589200-m04_ha-589200.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 cp ha-589200-m04:/home/docker/cp-test.txt ha-589200:/home/docker/cp-test_ha-589200-m04_ha-589200.txt: (1.6098523s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200-m04 "sudo cat /home/docker/cp-test.txt": (1.1773389s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200 "sudo cat /home/docker/cp-test_ha-589200-m04_ha-589200.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200 "sudo cat /home/docker/cp-test_ha-589200-m04_ha-589200.txt": (1.1287919s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 cp ha-589200-m04:/home/docker/cp-test.txt ha-589200-m02:/home/docker/cp-test_ha-589200-m04_ha-589200-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 cp ha-589200-m04:/home/docker/cp-test.txt ha-589200-m02:/home/docker/cp-test_ha-589200-m04_ha-589200-m02.txt: (1.6617425s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200-m04 "sudo cat /home/docker/cp-test.txt": (1.0894666s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200-m02 "sudo cat /home/docker/cp-test_ha-589200-m04_ha-589200-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200-m02 "sudo cat /home/docker/cp-test_ha-589200-m04_ha-589200-m02.txt": (1.1236484s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 cp ha-589200-m04:/home/docker/cp-test.txt ha-589200-m03:/home/docker/cp-test_ha-589200-m04_ha-589200-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 cp ha-589200-m04:/home/docker/cp-test.txt ha-589200-m03:/home/docker/cp-test_ha-589200-m04_ha-589200-m03.txt: (1.6739427s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200-m04 "sudo cat /home/docker/cp-test.txt": (1.1542204s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200-m03 "sudo cat /home/docker/cp-test_ha-589200-m04_ha-589200-m03.txt"
E0415 18:08:13.999872   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-661400\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200-m03 "sudo cat /home/docker/cp-test_ha-589200-m04_ha-589200-m03.txt": (1.133872s)
--- PASS: TestMultiControlPlane/serial/CopyFile (70.09s)
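
The copy matrix above boils down to one repeated round-trip: push a local file into a node with "minikube cp", copy it node-to-node, and read it back over "minikube ssh". A minimal sketch of a single source/target pair, reusing the profile and paths from this run (the test loops over every node combination):

  # local file -> primary control-plane node
  out/minikube-windows-amd64.exe -p ha-589200 cp testdata\cp-test.txt ha-589200:/home/docker/cp-test.txt
  # node -> node copy (primary to the m02 control plane)
  out/minikube-windows-amd64.exe -p ha-589200 cp ha-589200:/home/docker/cp-test.txt ha-589200-m02:/home/docker/cp-test_ha-589200_ha-589200-m02.txt
  # verify the contents arrived
  out/minikube-windows-amd64.exe -p ha-589200 ssh -n ha-589200-m02 "sudo cat /home/docker/cp-test_ha-589200_ha-589200-m02.txt"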

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (15.22s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 node stop m02 -v=7 --alsologtostderr
E0415 18:08:20.924992   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-662500\client.crt: The system cannot find the path specified.
ha_test.go:363: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 node stop m02 -v=7 --alsologtostderr: (11.9764941s)
ha_test.go:369: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-589200 status -v=7 --alsologtostderr: exit status 7 (3.2389439s)

                                                
                                                
-- stdout --
	ha-589200
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-589200-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-589200-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-589200-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 18:08:27.181996    1136 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0415 18:08:27.270550    1136 out.go:291] Setting OutFile to fd 768 ...
	I0415 18:08:27.271202    1136 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 18:08:27.271241    1136 out.go:304] Setting ErrFile to fd 752...
	I0415 18:08:27.271241    1136 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 18:08:27.285521    1136 out.go:298] Setting JSON to false
	I0415 18:08:27.285521    1136 mustload.go:65] Loading cluster: ha-589200
	I0415 18:08:27.285692    1136 notify.go:220] Checking for updates...
	I0415 18:08:27.286293    1136 config.go:182] Loaded profile config "ha-589200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 18:08:27.286293    1136 status.go:255] checking status of ha-589200 ...
	I0415 18:08:27.307285    1136 cli_runner.go:164] Run: docker container inspect ha-589200 --format={{.State.Status}}
	I0415 18:08:27.489398    1136 status.go:330] ha-589200 host status = "Running" (err=<nil>)
	I0415 18:08:27.489398    1136 host.go:66] Checking if "ha-589200" exists ...
	I0415 18:08:27.497915    1136 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-589200
	I0415 18:08:27.658448    1136 host.go:66] Checking if "ha-589200" exists ...
	I0415 18:08:27.672429    1136 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 18:08:27.680251    1136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-589200
	I0415 18:08:27.859688    1136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51739 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\ha-589200\id_rsa Username:docker}
	I0415 18:08:28.005999    1136 ssh_runner.go:195] Run: systemctl --version
	I0415 18:08:28.031602    1136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 18:08:28.062964    1136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-589200
	I0415 18:08:28.227716    1136 kubeconfig.go:125] found "ha-589200" server: "https://127.0.0.1:51743"
	I0415 18:08:28.227716    1136 api_server.go:166] Checking apiserver status ...
	I0415 18:08:28.238470    1136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0415 18:08:28.272432    1136 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2395/cgroup
	I0415 18:08:28.289429    1136 api_server.go:182] apiserver freezer: "7:freezer:/docker/173b9f29fde2b1284176aa925684fbf5f47da49658443b3ff5831807d07e301f/kubepods/burstable/pod5019720ac806bf2d3652898ca989b0b0/42deb4ad78cb88f1b3f29f34aa07296099f4342796cb281db9b504baf83467be"
	I0415 18:08:28.304852    1136 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/173b9f29fde2b1284176aa925684fbf5f47da49658443b3ff5831807d07e301f/kubepods/burstable/pod5019720ac806bf2d3652898ca989b0b0/42deb4ad78cb88f1b3f29f34aa07296099f4342796cb281db9b504baf83467be/freezer.state
	I0415 18:08:28.323262    1136 api_server.go:204] freezer state: "THAWED"
	I0415 18:08:28.323406    1136 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51743/healthz ...
	I0415 18:08:28.336989    1136 api_server.go:279] https://127.0.0.1:51743/healthz returned 200:
	ok
	I0415 18:08:28.337846    1136 status.go:422] ha-589200 apiserver status = Running (err=<nil>)
	I0415 18:08:28.337846    1136 status.go:257] ha-589200 status: &{Name:ha-589200 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0415 18:08:28.337846    1136 status.go:255] checking status of ha-589200-m02 ...
	I0415 18:08:28.360912    1136 cli_runner.go:164] Run: docker container inspect ha-589200-m02 --format={{.State.Status}}
	I0415 18:08:28.537000    1136 status.go:330] ha-589200-m02 host status = "Stopped" (err=<nil>)
	I0415 18:08:28.537048    1136 status.go:343] host is not running, skipping remaining checks
	I0415 18:08:28.537132    1136 status.go:257] ha-589200-m02 status: &{Name:ha-589200-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0415 18:08:28.537132    1136 status.go:255] checking status of ha-589200-m03 ...
	I0415 18:08:28.560204    1136 cli_runner.go:164] Run: docker container inspect ha-589200-m03 --format={{.State.Status}}
	I0415 18:08:28.721946    1136 status.go:330] ha-589200-m03 host status = "Running" (err=<nil>)
	I0415 18:08:28.721946    1136 host.go:66] Checking if "ha-589200-m03" exists ...
	I0415 18:08:28.730961    1136 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-589200-m03
	I0415 18:08:28.895604    1136 host.go:66] Checking if "ha-589200-m03" exists ...
	I0415 18:08:28.916180    1136 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 18:08:28.930185    1136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-589200-m03
	I0415 18:08:29.086774    1136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51860 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\ha-589200-m03\id_rsa Username:docker}
	I0415 18:08:29.216438    1136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 18:08:29.248406    1136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-589200
	I0415 18:08:29.413026    1136 kubeconfig.go:125] found "ha-589200" server: "https://127.0.0.1:51743"
	I0415 18:08:29.413026    1136 api_server.go:166] Checking apiserver status ...
	I0415 18:08:29.427767    1136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0415 18:08:29.466191    1136 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2227/cgroup
	I0415 18:08:29.484926    1136 api_server.go:182] apiserver freezer: "7:freezer:/docker/20a518feb510002ba895c6d1d4519cdba29d3bacac207bf6ba196f796ef85ef9/kubepods/burstable/pod845ca0855f7810a7231735fdd0ef2155/d811e084a29918e7c4aa5d542c824689854eb09fb24cb3bcb4379f609d0dceb3"
	I0415 18:08:29.496975    1136 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/20a518feb510002ba895c6d1d4519cdba29d3bacac207bf6ba196f796ef85ef9/kubepods/burstable/pod845ca0855f7810a7231735fdd0ef2155/d811e084a29918e7c4aa5d542c824689854eb09fb24cb3bcb4379f609d0dceb3/freezer.state
	I0415 18:08:29.516213    1136 api_server.go:204] freezer state: "THAWED"
	I0415 18:08:29.516213    1136 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51743/healthz ...
	I0415 18:08:29.530798    1136 api_server.go:279] https://127.0.0.1:51743/healthz returned 200:
	ok
	I0415 18:08:29.530798    1136 status.go:422] ha-589200-m03 apiserver status = Running (err=<nil>)
	I0415 18:08:29.530798    1136 status.go:257] ha-589200-m03 status: &{Name:ha-589200-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0415 18:08:29.530798    1136 status.go:255] checking status of ha-589200-m04 ...
	I0415 18:08:29.549830    1136 cli_runner.go:164] Run: docker container inspect ha-589200-m04 --format={{.State.Status}}
	I0415 18:08:29.725154    1136 status.go:330] ha-589200-m04 host status = "Running" (err=<nil>)
	I0415 18:08:29.725154    1136 host.go:66] Checking if "ha-589200-m04" exists ...
	I0415 18:08:29.734118    1136 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-589200-m04
	I0415 18:08:29.899361    1136 host.go:66] Checking if "ha-589200-m04" exists ...
	I0415 18:08:29.911350    1136 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 18:08:29.920355    1136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-589200-m04
	I0415 18:08:30.116206    1136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51988 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\ha-589200-m04\id_rsa Username:docker}
	I0415 18:08:30.245660    1136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 18:08:30.270980    1136 status.go:257] ha-589200-m04 status: &{Name:ha-589200-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (15.22s)
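
Note that the "Non-zero exit" above is expected: once a node is down, status returns a non-zero code (exit status 7 in this run) while still printing the per-node breakdown, so the test passes. A minimal sketch of the same check, with the exit-code inspection shown for a POSIX shell purely as an illustration:

  # stop the m02 control-plane node, then ask for cluster status
  out/minikube-windows-amd64.exe -p ha-589200 node stop m02 -v=7 --alsologtostderr
  out/minikube-windows-amd64.exe -p ha-589200 status -v=7 --alsologtostderr
  # non-zero (7 in the run above) because ha-589200-m02 is Stopped; 0 only when everything is Running
  echo $?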

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (2.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (2.5524559s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (2.55s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (61.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 node start m02 -v=7 --alsologtostderr: (56.7804601s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 status -v=7 --alsologtostderr: (4.1149472s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (61.09s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (3.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
E0415 18:09:37.148359   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-661400\client.crt: The system cannot find the path specified.
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (3.4878174s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (3.49s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (261.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-windows-amd64.exe node list -p ha-589200 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-windows-amd64.exe stop -p ha-589200 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-windows-amd64.exe stop -p ha-589200 -v=7 --alsologtostderr: (39.3041266s)
ha_test.go:467: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-589200 --wait=true -v=7 --alsologtostderr
E0415 18:10:36.965100   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-662500\client.crt: The system cannot find the path specified.
E0415 18:11:04.776437   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-662500\client.crt: The system cannot find the path specified.
E0415 18:13:14.017444   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-661400\client.crt: The system cannot find the path specified.
ha_test.go:467: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-589200 --wait=true -v=7 --alsologtostderr: (3m41.5376223s)
ha_test.go:472: (dbg) Run:  out/minikube-windows-amd64.exe node list -p ha-589200
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (261.31s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (22.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 node delete m03 -v=7 --alsologtostderr: (18.9462271s)
ha_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 status -v=7 --alsologtostderr: (3.0319047s)
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (22.42s)
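
The final assertion relies on a go-template that prints one Ready condition per node, so after deleting m03 it should emit exactly one "True" per remaining node. The same commands can be run by hand against this profile:

  # drop the third control-plane node, then confirm every remaining node reports Ready=True
  out/minikube-windows-amd64.exe -p ha-589200 node delete m03 -v=7 --alsologtostderr
  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"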

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (2.33s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (2.3315148s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (2.33s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (37.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 stop -v=7 --alsologtostderr: (36.6590561s)
ha_test.go:537: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-589200 status -v=7 --alsologtostderr: exit status 7 (795.026ms)

                                                
                                                
-- stdout --
	ha-589200
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-589200-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-589200-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 18:15:00.290650   13096 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0415 18:15:00.376394   13096 out.go:291] Setting OutFile to fd 628 ...
	I0415 18:15:00.376661   13096 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 18:15:00.376661   13096 out.go:304] Setting ErrFile to fd 960...
	I0415 18:15:00.376661   13096 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 18:15:00.389784   13096 out.go:298] Setting JSON to false
	I0415 18:15:00.389873   13096 mustload.go:65] Loading cluster: ha-589200
	I0415 18:15:00.389961   13096 notify.go:220] Checking for updates...
	I0415 18:15:00.390686   13096 config.go:182] Loaded profile config "ha-589200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 18:15:00.390786   13096 status.go:255] checking status of ha-589200 ...
	I0415 18:15:00.410944   13096 cli_runner.go:164] Run: docker container inspect ha-589200 --format={{.State.Status}}
	I0415 18:15:00.577602   13096 status.go:330] ha-589200 host status = "Stopped" (err=<nil>)
	I0415 18:15:00.577705   13096 status.go:343] host is not running, skipping remaining checks
	I0415 18:15:00.577705   13096 status.go:257] ha-589200 status: &{Name:ha-589200 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0415 18:15:00.577705   13096 status.go:255] checking status of ha-589200-m02 ...
	I0415 18:15:00.596838   13096 cli_runner.go:164] Run: docker container inspect ha-589200-m02 --format={{.State.Status}}
	I0415 18:15:00.749750   13096 status.go:330] ha-589200-m02 host status = "Stopped" (err=<nil>)
	I0415 18:15:00.749750   13096 status.go:343] host is not running, skipping remaining checks
	I0415 18:15:00.749750   13096 status.go:257] ha-589200-m02 status: &{Name:ha-589200-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0415 18:15:00.749750   13096 status.go:255] checking status of ha-589200-m04 ...
	I0415 18:15:00.780815   13096 cli_runner.go:164] Run: docker container inspect ha-589200-m04 --format={{.State.Status}}
	I0415 18:15:00.953071   13096 status.go:330] ha-589200-m04 host status = "Stopped" (err=<nil>)
	I0415 18:15:00.953071   13096 status.go:343] host is not running, skipping remaining checks
	I0415 18:15:00.953071   13096 status.go:257] ha-589200-m04 status: &{Name:ha-589200-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (37.46s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (113.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-589200 --wait=true -v=7 --alsologtostderr --driver=docker
E0415 18:15:36.978501   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-662500\client.crt: The system cannot find the path specified.
ha_test.go:560: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-589200 --wait=true -v=7 --alsologtostderr --driver=docker: (1m49.7147801s)
ha_test.go:566: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 status -v=7 --alsologtostderr
ha_test.go:566: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 status -v=7 --alsologtostderr: (3.0236245s)
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (113.16s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (2.3s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (2.3015203s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (2.30s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (78.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-windows-amd64.exe node add -p ha-589200 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-windows-amd64.exe node add -p ha-589200 --control-plane -v=7 --alsologtostderr: (1m14.6092462s)
ha_test.go:611: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-589200 status -v=7 --alsologtostderr
E0415 18:18:14.033042   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-661400\client.crt: The system cannot find the path specified.
ha_test.go:611: (dbg) Done: out/minikube-windows-amd64.exe -p ha-589200 status -v=7 --alsologtostderr: (4.2092141s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (78.82s)
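
Adding a control-plane node differs from the earlier AddWorkerNode step only in the --control-plane flag; a condensed sketch of the two variants as invoked in this run:

  # add a worker node to the HA cluster
  out/minikube-windows-amd64.exe node add -p ha-589200 -v=7 --alsologtostderr
  # add another control-plane node instead
  out/minikube-windows-amd64.exe node add -p ha-589200 --control-plane -v=7 --alsologtostderr
  # either way, status should afterwards report every node as Running/Configured
  out/minikube-windows-amd64.exe -p ha-589200 status -v=7 --alsologtostderr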

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (3.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (3.349788s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (3.35s)

                                                
                                    
TestImageBuild/serial/Setup (67.58s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-319700 --driver=docker
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-319700 --driver=docker: (1m7.575499s)
--- PASS: TestImageBuild/serial/Setup (67.58s)

                                                
                                    
TestImageBuild/serial/NormalBuild (3.93s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-319700
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-319700: (3.9341397s)
--- PASS: TestImageBuild/serial/NormalBuild (3.93s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (2.85s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-319700
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-319700: (2.8514727s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (2.85s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (2.03s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-319700
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-319700: (2.0269262s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (2.03s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (2.45s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-319700
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-319700: (2.4466803s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (2.45s)
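
Taken together, the TestImageBuild cases above cover the main "minikube image build" variants; a condensed sketch against the same profile (the ./testdata/image-build contexts ship with the minikube test tree):

  # plain build from a context directory
  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-319700
  # pass a build arg and disable the cache via --build-opt
  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-319700
  # use an explicit Dockerfile inside the context
  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-319700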

                                                
                                    
TestJSONOutput/start/Command (83.01s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-289900 --output=json --user=testUser --memory=2200 --wait=true --driver=docker
E0415 18:20:36.993132   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-662500\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-289900 --output=json --user=testUser --memory=2200 --wait=true --driver=docker: (1m23.0076593s)
--- PASS: TestJSONOutput/start/Command (83.01s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (1.77s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-289900 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-289900 --output=json --user=testUser: (1.7647879s)
--- PASS: TestJSONOutput/pause/Command (1.77s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (1.57s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-289900 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-289900 --output=json --user=testUser: (1.5725666s)
--- PASS: TestJSONOutput/unpause/Command (1.57s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.51s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-289900 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-289900 --output=json --user=testUser: (7.5059469s)
--- PASS: TestJSONOutput/stop/Command (7.51s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (1.37s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-452300 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-452300 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (261.3047ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"049974fc-be4d-47a1-ba9e-143f44454d85","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-452300] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8b62677f-7232-4495-add5-f7b717d7badb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube4\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"3772d6d0-f81d-4a90-adf6-eddf45122cfa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"26fa1e40-51b1-4882-8598-667ce8d55f11","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"b2f88938-1ad9-4acb-b6bb-ebb44defae66","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18634"}}
	{"specversion":"1.0","id":"924e2833-fbce-4466-9943-1d2fc3810534","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4c63afe7-be0b-4694-99c1-0b12854faced","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 18:21:40.284392    2968 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "json-output-error-452300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-452300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-452300: (1.1092787s)
--- PASS: TestErrorJSONOutput (1.37s)
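
With --output=json every line minikube emits is a CloudEvents-style JSON object (the io.k8s.sigs.minikube.step / .info / .error types visible above), which is what the JSONOutput tests parse. Purely as an illustration, not something the test does, the error event from a run like this one can be extracted with jq, assuming jq is installed and using POSIX-shell quoting:

  # expect exit status 56 plus a DRV_UNSUPPORTED_OS error event for an unknown driver
  out/minikube-windows-amd64.exe start -p json-output-error-452300 --memory=2200 --output=json --wait=true --driver=fail | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'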

                                                
                                    
TestKicCustomNetwork/create_custom_network (77.31s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-778000 --network=
E0415 18:22:00.181333   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-662500\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-778000 --network=: (1m11.8912689s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-778000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-778000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-778000: (5.2389446s)
--- PASS: TestKicCustomNetwork/create_custom_network (77.31s)
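
Note: the verification step above only lists Docker network names with a Go template. A small sketch of the same check outside the harness, assuming the Docker CLI is on PATH and that, with an empty `--network=`, minikube creates a network named after the profile (the name below is the profile from this run).

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same command the test runs: list network names only.
		out, err := exec.Command("docker", "network", "ls", "--format", "{{.Name}}").Output()
		if err != nil {
			panic(err)
		}
		// Assumption: the network is named after the profile when --network is left empty.
		want := "docker-network-778000"
		for _, name := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if name == want {
				fmt.Println("found network", want)
				return
			}
		}
		fmt.Println("network", want, "not found")
	}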

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (77.65s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-489700 --network=bridge
E0415 18:23:14.052921   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-661400\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-489700 --network=bridge: (1m12.9373726s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-489700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-489700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-489700: (4.5440143s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (77.65s)

                                                
                                    
x
+
TestKicExistingNetwork (78.91s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-windows-amd64.exe start -p existing-network-777300 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-windows-amd64.exe start -p existing-network-777300 --network=existing-network: (1m12.9166946s)
helpers_test.go:175: Cleaning up "existing-network-777300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p existing-network-777300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p existing-network-777300: (4.6902132s)
--- PASS: TestKicExistingNetwork (78.91s)
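
Note: this test attaches minikube to a user-managed Docker network that already exists. A sketch of pre-creating such a network before running `start --network=existing-network`; the subnet below is purely illustrative, any unused range would do.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Pre-create the network that --network=existing-network will join.
		cmd := exec.Command("docker", "network", "create",
			"--subnet=192.168.70.0/24", "existing-network")
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("create failed: %v\n%s", err, out)
			return
		}
		fmt.Println("network existing-network created")
	}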

                                                
                                    
x
+
TestKicCustomSubnet (78.21s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-subnet-540100 --subnet=192.168.60.0/24
E0415 18:25:36.996231   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-662500\client.crt: The system cannot find the path specified.
E0415 18:26:17.205632   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-661400\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-subnet-540100 --subnet=192.168.60.0/24: (1m12.8655407s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-540100 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-540100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p custom-subnet-540100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p custom-subnet-540100: (5.1389841s)
--- PASS: TestKicCustomSubnet (78.21s)
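
Note: the check above reads the network's IPAM config with a Go template and compares it to the requested `--subnet`. A sketch of the same comparison; the profile name and expected CIDR are the ones from this run.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		const profile = "custom-subnet-540100"
		const want = "192.168.60.0/24" // value passed to --subnet above

		// Same template the test uses to read the first IPAM config entry.
		out, err := exec.Command("docker", "network", "inspect", profile,
			"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
		if err != nil {
			panic(err)
		}
		got := strings.TrimSpace(string(out))
		fmt.Printf("subnet %s, matches requested: %v\n", got, got == want)
	}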

                                                
                                    
x
+
TestKicStaticIP (79.94s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe start -p static-ip-281300 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe start -p static-ip-281300 --static-ip=192.168.200.200: (1m14.0428074s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-windows-amd64.exe -p static-ip-281300 ip
helpers_test.go:175: Cleaning up "static-ip-281300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p static-ip-281300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p static-ip-281300: (5.2500804s)
--- PASS: TestKicStaticIP (79.94s)
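
Note: after starting with `--static-ip`, the test reads the node address back with `minikube ip`. A sketch of that comparison, using the same relative binary path as the log (so it assumes the same working directory) and the IP requested in this run.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		const want = "192.168.200.200" // value passed to --static-ip above

		out, err := exec.Command("out/minikube-windows-amd64.exe",
			"-p", "static-ip-281300", "ip").Output()
		if err != nil {
			panic(err)
		}
		got := strings.TrimSpace(string(out))
		fmt.Printf("minikube ip reports %s, matches requested: %v\n", got, got == want)
	}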

                                                
                                    
x
+
TestMainNoArgs (0.24s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.24s)

                                                
                                    
x
+
TestMinikubeProfile (147s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-838800 --driver=docker
E0415 18:28:14.053183   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-661400\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-838800 --driver=docker: (1m7.4586263s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-838800 --driver=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-838800 --driver=docker: (1m2.9241159s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-838800
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (2.36078s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-838800
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (2.3518253s)
helpers_test.go:175: Cleaning up "second-838800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-838800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-838800: (5.5958325s)
helpers_test.go:175: Cleaning up "first-838800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-838800
E0415 18:30:37.013088   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-662500\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-838800: (5.456801s)
--- PASS: TestMinikubeProfile (147.00s)
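
Note: the test switches the active profile with `minikube profile <name>` and then inspects `profile list -ojson`. A sketch of decoding that JSON; the schema here is an assumption, only the top-level "valid"/"invalid" arrays and a per-profile "Name" field are relied on, the real output carries much more detail.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// profileList models only the fields this sketch needs (assumed schema).
	type profileList struct {
		Valid   []struct{ Name string } `json:"valid"`
		Invalid []struct{ Name string } `json:"invalid"`
	}

	func main() {
		out, err := exec.Command("out/minikube-windows-amd64.exe",
			"profile", "list", "-ojson").Output()
		if err != nil {
			panic(err)
		}
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			fmt.Println("valid profile:", p.Name)
		}
	}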

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (20.03s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-265900 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-265900 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker: (19.022208s)
--- PASS: TestMountStart/serial/StartWithMountFirst (20.03s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (1.16s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-265900 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-265900 ssh -- ls /minikube-host: (1.1609724s)
--- PASS: TestMountStart/serial/VerifyMountFirst (1.16s)
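
Note: mount verification here is just an ssh `ls` of /minikube-host inside the node. A sketch of the same probe, treating a non-zero exit as "mount not visible"; the profile name and mount point are the ones from this run.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// `minikube ssh -- <cmd>` runs the command inside the node; a failing
		// `ls` here is taken to mean /minikube-host is not mounted.
		cmd := exec.Command("out/minikube-windows-amd64.exe",
			"-p", "mount-start-1-265900", "ssh", "--", "ls", "/minikube-host")
		out, err := cmd.CombinedOutput()
		if err != nil {
			fmt.Printf("mount check failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("host mount visible:\n%s", out)
	}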

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (19.1s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-265900 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-265900 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker: (18.0904399s)
--- PASS: TestMountStart/serial/StartWithMountSecond (19.10s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (1.1s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-265900 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-265900 ssh -- ls /minikube-host: (1.103592s)
--- PASS: TestMountStart/serial/VerifyMountSecond (1.10s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (3.97s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-265900 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-265900 --alsologtostderr -v=5: (3.9699731s)
--- PASS: TestMountStart/serial/DeleteFirst (3.97s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (1.15s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-265900 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-265900 ssh -- ls /minikube-host: (1.1478873s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (1.15s)

                                                
                                    
x
+
TestMountStart/serial/Stop (2.54s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-265900
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-265900: (2.5427969s)
--- PASS: TestMountStart/serial/Stop (2.54s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (13.35s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-265900
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-265900: (12.3446322s)
--- PASS: TestMountStart/serial/RestartStopped (13.35s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (1.11s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-265900 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-265900 ssh -- ls /minikube-host: (1.1120727s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (1.11s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (146.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-457000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker
E0415 18:33:14.078289   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-661400\client.crt: The system cannot find the path specified.
multinode_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-457000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker: (2m23.8084646s)
multinode_test.go:102: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-457000 status --alsologtostderr
multinode_test.go:102: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-457000 status --alsologtostderr: (2.3585231s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (146.17s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (24.93s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-457000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-457000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-457000 -- rollout status deployment/busybox: (17.7877875s)
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-457000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-457000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-457000 -- exec busybox-7fdf7869d9-546b5 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-457000 -- exec busybox-7fdf7869d9-546b5 -- nslookup kubernetes.io: (1.7590726s)
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-457000 -- exec busybox-7fdf7869d9-pw88j -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-457000 -- exec busybox-7fdf7869d9-pw88j -- nslookup kubernetes.io: (1.5560554s)
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-457000 -- exec busybox-7fdf7869d9-546b5 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-457000 -- exec busybox-7fdf7869d9-pw88j -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-457000 -- exec busybox-7fdf7869d9-546b5 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-457000 -- exec busybox-7fdf7869d9-pw88j -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (24.93s)
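
Note: the deployment above schedules two busybox replicas across the nodes and resolves kubernetes.io and kubernetes.default from each. A sketch of the same DNS probe, assuming kubectl is on PATH and using the context from this run; the pod names are the ones that happened to appear here and would normally be discovered with `kubectl get pods`.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Pod names from this particular run; illustrative only.
		pods := []string{"busybox-7fdf7869d9-546b5", "busybox-7fdf7869d9-pw88j"}
		for _, pod := range pods {
			out, err := exec.Command("kubectl", "--context", "multinode-457000",
				"exec", pod, "--", "nslookup", "kubernetes.default").CombinedOutput()
			if err != nil {
				fmt.Printf("%s: DNS lookup failed: %v\n%s", pod, err, out)
				continue
			}
			fmt.Printf("%s: cluster DNS OK\n", pod)
		}
	}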

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (2.55s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-457000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-457000 -- exec busybox-7fdf7869d9-546b5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-457000 -- exec busybox-7fdf7869d9-546b5 -- sh -c "ping -c 1 192.168.65.254"
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-457000 -- exec busybox-7fdf7869d9-pw88j -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-457000 -- exec busybox-7fdf7869d9-pw88j -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (2.55s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (53.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-457000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-457000 -v 3 --alsologtostderr: (50.1264128s)
multinode_test.go:127: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-457000 status --alsologtostderr
multinode_test.go:127: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-457000 status --alsologtostderr: (2.9954696s)
--- PASS: TestMultiNode/serial/AddNode (53.12s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-457000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.19s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (1.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
E0415 18:35:37.023698   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-662500\client.crt: The system cannot find the path specified.
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.7521605s)
--- PASS: TestMultiNode/serial/ProfileList (1.75s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (39.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-457000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-457000 status --output json --alsologtostderr: (2.7824071s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-457000 cp testdata\cp-test.txt multinode-457000:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-457000 cp testdata\cp-test.txt multinode-457000:/home/docker/cp-test.txt: (1.1246745s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-457000 ssh -n multinode-457000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-457000 ssh -n multinode-457000 "sudo cat /home/docker/cp-test.txt": (1.1105867s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-457000 cp multinode-457000:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile1428148595\001\cp-test_multinode-457000.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-457000 cp multinode-457000:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile1428148595\001\cp-test_multinode-457000.txt: (1.1462929s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-457000 ssh -n multinode-457000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-457000 ssh -n multinode-457000 "sudo cat /home/docker/cp-test.txt": (1.1254471s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-457000 cp multinode-457000:/home/docker/cp-test.txt multinode-457000-m02:/home/docker/cp-test_multinode-457000_multinode-457000-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-457000 cp multinode-457000:/home/docker/cp-test.txt multinode-457000-m02:/home/docker/cp-test_multinode-457000_multinode-457000-m02.txt: (1.6522383s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-457000 ssh -n multinode-457000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-457000 ssh -n multinode-457000 "sudo cat /home/docker/cp-test.txt": (1.0967992s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-457000 ssh -n multinode-457000-m02 "sudo cat /home/docker/cp-test_multinode-457000_multinode-457000-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-457000 ssh -n multinode-457000-m02 "sudo cat /home/docker/cp-test_multinode-457000_multinode-457000-m02.txt": (1.1491032s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-457000 cp multinode-457000:/home/docker/cp-test.txt multinode-457000-m03:/home/docker/cp-test_multinode-457000_multinode-457000-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-457000 cp multinode-457000:/home/docker/cp-test.txt multinode-457000-m03:/home/docker/cp-test_multinode-457000_multinode-457000-m03.txt: (1.6252993s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-457000 ssh -n multinode-457000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-457000 ssh -n multinode-457000 "sudo cat /home/docker/cp-test.txt": (1.1240847s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-457000 ssh -n multinode-457000-m03 "sudo cat /home/docker/cp-test_multinode-457000_multinode-457000-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-457000 ssh -n multinode-457000-m03 "sudo cat /home/docker/cp-test_multinode-457000_multinode-457000-m03.txt": (1.1175499s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-457000 cp testdata\cp-test.txt multinode-457000-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-457000 cp testdata\cp-test.txt multinode-457000-m02:/home/docker/cp-test.txt: (1.1289766s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-457000 ssh -n multinode-457000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-457000 ssh -n multinode-457000-m02 "sudo cat /home/docker/cp-test.txt": (1.1100854s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-457000 cp multinode-457000-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile1428148595\001\cp-test_multinode-457000-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-457000 cp multinode-457000-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile1428148595\001\cp-test_multinode-457000-m02.txt: (1.1374679s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-457000 ssh -n multinode-457000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-457000 ssh -n multinode-457000-m02 "sudo cat /home/docker/cp-test.txt": (1.1044367s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-457000 cp multinode-457000-m02:/home/docker/cp-test.txt multinode-457000:/home/docker/cp-test_multinode-457000-m02_multinode-457000.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-457000 cp multinode-457000-m02:/home/docker/cp-test.txt multinode-457000:/home/docker/cp-test_multinode-457000-m02_multinode-457000.txt: (1.6289553s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-457000 ssh -n multinode-457000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-457000 ssh -n multinode-457000-m02 "sudo cat /home/docker/cp-test.txt": (1.1429812s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-457000 ssh -n multinode-457000 "sudo cat /home/docker/cp-test_multinode-457000-m02_multinode-457000.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-457000 ssh -n multinode-457000 "sudo cat /home/docker/cp-test_multinode-457000-m02_multinode-457000.txt": (1.1353723s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-457000 cp multinode-457000-m02:/home/docker/cp-test.txt multinode-457000-m03:/home/docker/cp-test_multinode-457000-m02_multinode-457000-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-457000 cp multinode-457000-m02:/home/docker/cp-test.txt multinode-457000-m03:/home/docker/cp-test_multinode-457000-m02_multinode-457000-m03.txt: (1.6488255s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-457000 ssh -n multinode-457000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-457000 ssh -n multinode-457000-m02 "sudo cat /home/docker/cp-test.txt": (1.14975s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-457000 ssh -n multinode-457000-m03 "sudo cat /home/docker/cp-test_multinode-457000-m02_multinode-457000-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-457000 ssh -n multinode-457000-m03 "sudo cat /home/docker/cp-test_multinode-457000-m02_multinode-457000-m03.txt": (1.1249722s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-457000 cp testdata\cp-test.txt multinode-457000-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-457000 cp testdata\cp-test.txt multinode-457000-m03:/home/docker/cp-test.txt: (1.1436525s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-457000 ssh -n multinode-457000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-457000 ssh -n multinode-457000-m03 "sudo cat /home/docker/cp-test.txt": (1.1211513s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-457000 cp multinode-457000-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile1428148595\001\cp-test_multinode-457000-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-457000 cp multinode-457000-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile1428148595\001\cp-test_multinode-457000-m03.txt: (1.0901564s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-457000 ssh -n multinode-457000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-457000 ssh -n multinode-457000-m03 "sudo cat /home/docker/cp-test.txt": (1.1038751s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-457000 cp multinode-457000-m03:/home/docker/cp-test.txt multinode-457000:/home/docker/cp-test_multinode-457000-m03_multinode-457000.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-457000 cp multinode-457000-m03:/home/docker/cp-test.txt multinode-457000:/home/docker/cp-test_multinode-457000-m03_multinode-457000.txt: (1.6542104s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-457000 ssh -n multinode-457000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-457000 ssh -n multinode-457000-m03 "sudo cat /home/docker/cp-test.txt": (1.100736s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-457000 ssh -n multinode-457000 "sudo cat /home/docker/cp-test_multinode-457000-m03_multinode-457000.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-457000 ssh -n multinode-457000 "sudo cat /home/docker/cp-test_multinode-457000-m03_multinode-457000.txt": (1.126344s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-457000 cp multinode-457000-m03:/home/docker/cp-test.txt multinode-457000-m02:/home/docker/cp-test_multinode-457000-m03_multinode-457000-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-457000 cp multinode-457000-m03:/home/docker/cp-test.txt multinode-457000-m02:/home/docker/cp-test_multinode-457000-m03_multinode-457000-m02.txt: (1.6573952s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-457000 ssh -n multinode-457000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-457000 ssh -n multinode-457000-m03 "sudo cat /home/docker/cp-test.txt": (1.1214087s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-457000 ssh -n multinode-457000-m02 "sudo cat /home/docker/cp-test_multinode-457000-m03_multinode-457000-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-457000 ssh -n multinode-457000-m02 "sudo cat /home/docker/cp-test_multinode-457000-m03_multinode-457000-m02.txt": (1.1471453s)
--- PASS: TestMultiNode/serial/CopyFile (39.65s)
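
Note: every `minikube cp` above is immediately verified with an ssh `cat` on the target node. A sketch of one host-to-node copy-and-verify round trip using the same commands; the profile and node names are from this run, the file content is invented for illustration.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func run(args ...string) (string, error) {
		out, err := exec.Command("out/minikube-windows-amd64.exe", args...).CombinedOutput()
		return string(out), err
	}

	func main() {
		const profile = "multinode-457000"
		// Write a small file, copy it into the control-plane node, read it back.
		if err := os.WriteFile("cp-test.txt", []byte("hello from the host\n"), 0o644); err != nil {
			panic(err)
		}
		if out, err := run("-p", profile, "cp", "cp-test.txt",
			profile+":/home/docker/cp-test.txt"); err != nil {
			panic(fmt.Sprintf("cp failed: %v\n%s", err, out))
		}
		out, err := run("-p", profile, "ssh", "-n", profile,
			"sudo cat /home/docker/cp-test.txt")
		if err != nil {
			panic(err)
		}
		fmt.Println("round trip matches:", strings.Contains(out, "hello from the host"))
	}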

                                                
                                    
x
+
TestMultiNode/serial/StopNode (6.51s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-457000 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-457000 node stop m03: (2.13526s)
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-457000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-457000 status: exit status 7 (2.1680613s)

                                                
                                                
-- stdout --
	multinode-457000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-457000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-457000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 18:36:20.320809    6888 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-457000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-457000 status --alsologtostderr: exit status 7 (2.208054s)

                                                
                                                
-- stdout --
	multinode-457000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-457000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-457000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 18:36:22.485075   12444 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0415 18:36:22.569507   12444 out.go:291] Setting OutFile to fd 656 ...
	I0415 18:36:22.570561   12444 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 18:36:22.570561   12444 out.go:304] Setting ErrFile to fd 712...
	I0415 18:36:22.570561   12444 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 18:36:22.583508   12444 out.go:298] Setting JSON to false
	I0415 18:36:22.584078   12444 mustload.go:65] Loading cluster: multinode-457000
	I0415 18:36:22.584078   12444 notify.go:220] Checking for updates...
	I0415 18:36:22.584861   12444 config.go:182] Loaded profile config "multinode-457000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 18:36:22.584861   12444 status.go:255] checking status of multinode-457000 ...
	I0415 18:36:22.604898   12444 cli_runner.go:164] Run: docker container inspect multinode-457000 --format={{.State.Status}}
	I0415 18:36:22.765821   12444 status.go:330] multinode-457000 host status = "Running" (err=<nil>)
	I0415 18:36:22.766244   12444 host.go:66] Checking if "multinode-457000" exists ...
	I0415 18:36:22.778111   12444 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-457000
	I0415 18:36:22.968719   12444 host.go:66] Checking if "multinode-457000" exists ...
	I0415 18:36:22.984443   12444 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 18:36:22.995061   12444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-457000
	I0415 18:36:23.155945   12444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53223 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\multinode-457000\id_rsa Username:docker}
	I0415 18:36:23.281543   12444 ssh_runner.go:195] Run: systemctl --version
	I0415 18:36:23.310084   12444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 18:36:23.342981   12444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-457000
	I0415 18:36:23.513766   12444 kubeconfig.go:125] found "multinode-457000" server: "https://127.0.0.1:53222"
	I0415 18:36:23.513766   12444 api_server.go:166] Checking apiserver status ...
	I0415 18:36:23.528130   12444 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0415 18:36:23.564567   12444 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2284/cgroup
	I0415 18:36:23.584399   12444 api_server.go:182] apiserver freezer: "7:freezer:/docker/ee311342065ef5959b95089b72f95b15e47f438dd1843e05410e182a09e69bfc/kubepods/burstable/pod36d537a7c7db52c83b26fb5f7ca6cfc9/7b732e6661e19aee73beeab2a305e0426e0885b706e05ea49e53da911eeb9c1f"
	I0415 18:36:23.596261   12444 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/ee311342065ef5959b95089b72f95b15e47f438dd1843e05410e182a09e69bfc/kubepods/burstable/pod36d537a7c7db52c83b26fb5f7ca6cfc9/7b732e6661e19aee73beeab2a305e0426e0885b706e05ea49e53da911eeb9c1f/freezer.state
	I0415 18:36:23.614955   12444 api_server.go:204] freezer state: "THAWED"
	I0415 18:36:23.615024   12444 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:53222/healthz ...
	I0415 18:36:23.626350   12444 api_server.go:279] https://127.0.0.1:53222/healthz returned 200:
	ok
	I0415 18:36:23.626350   12444 status.go:422] multinode-457000 apiserver status = Running (err=<nil>)
	I0415 18:36:23.626350   12444 status.go:257] multinode-457000 status: &{Name:multinode-457000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0415 18:36:23.627334   12444 status.go:255] checking status of multinode-457000-m02 ...
	I0415 18:36:23.644085   12444 cli_runner.go:164] Run: docker container inspect multinode-457000-m02 --format={{.State.Status}}
	I0415 18:36:23.820296   12444 status.go:330] multinode-457000-m02 host status = "Running" (err=<nil>)
	I0415 18:36:23.820296   12444 host.go:66] Checking if "multinode-457000-m02" exists ...
	I0415 18:36:23.833558   12444 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-457000-m02
	I0415 18:36:24.007929   12444 host.go:66] Checking if "multinode-457000-m02" exists ...
	I0415 18:36:24.019757   12444 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 18:36:24.027405   12444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-457000-m02
	I0415 18:36:24.194498   12444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53270 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\multinode-457000-m02\id_rsa Username:docker}
	I0415 18:36:24.330709   12444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 18:36:24.355633   12444 status.go:257] multinode-457000-m02 status: &{Name:multinode-457000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0415 18:36:24.355633   12444 status.go:255] checking status of multinode-457000-m03 ...
	I0415 18:36:24.378265   12444 cli_runner.go:164] Run: docker container inspect multinode-457000-m03 --format={{.State.Status}}
	I0415 18:36:24.550872   12444 status.go:330] multinode-457000-m03 host status = "Stopped" (err=<nil>)
	I0415 18:36:24.550872   12444 status.go:343] host is not running, skipping remaining checks
	I0415 18:36:24.551415   12444 status.go:257] multinode-457000-m03 status: &{Name:multinode-457000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (6.51s)
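
Note: after stopping m03, `minikube status` exits non-zero (exit status 7 in both runs above) while still printing per-node state, so the test asserts on the exit code as well as the text. A sketch that distinguishes the two outcomes from Go.

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-windows-amd64.exe", "-p", "multinode-457000", "status")
		out, err := cmd.CombinedOutput()
		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("all nodes running")
		case errors.As(err, &exitErr):
			// Non-zero exit (7 in the log above) means at least one host or kubelet is down.
			fmt.Printf("degraded cluster, exit code %d:\n%s", exitErr.ExitCode(), out)
		default:
			panic(err)
		}
	}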

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (20.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-457000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-457000 node start m03 -v=7 --alsologtostderr: (17.3990155s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-457000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-457000 status -v=7 --alsologtostderr: (2.7321134s)
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (20.31s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (107.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-457000
multinode_test.go:321: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-457000
multinode_test.go:321: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-457000: (25.9696969s)
multinode_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-457000 --wait=true -v=8 --alsologtostderr
E0415 18:38:14.092886   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-661400\client.crt: The system cannot find the path specified.
multinode_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-457000 --wait=true -v=8 --alsologtostderr: (1m20.8204231s)
multinode_test.go:331: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-457000
--- PASS: TestMultiNode/serial/RestartKeepsNodes (107.27s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (13.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-457000 node delete m03
E0415 18:38:40.242895   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-662500\client.crt: The system cannot find the path specified.
multinode_test.go:416: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-457000 node delete m03: (10.7932867s)
multinode_test.go:422: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-457000 status --alsologtostderr
multinode_test.go:422: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-457000 status --alsologtostderr: (2.0925108s)
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (13.35s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (25.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-457000 stop
multinode_test.go:345: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-457000 stop: (24.1894913s)
multinode_test.go:351: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-457000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-457000 status: exit status 7 (625.1714ms)

                                                
                                                
-- stdout --
	multinode-457000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-457000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 18:39:09.808255     812 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
multinode_test.go:358: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-457000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-457000 status --alsologtostderr: exit status 7 (592.6738ms)

                                                
                                                
-- stdout --
	multinode-457000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-457000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 18:39:10.427985    7520 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0415 18:39:10.509207    7520 out.go:291] Setting OutFile to fd 864 ...
	I0415 18:39:10.510206    7520 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 18:39:10.510206    7520 out.go:304] Setting ErrFile to fd 628...
	I0415 18:39:10.510206    7520 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 18:39:10.523217    7520 out.go:298] Setting JSON to false
	I0415 18:39:10.523217    7520 mustload.go:65] Loading cluster: multinode-457000
	I0415 18:39:10.523217    7520 notify.go:220] Checking for updates...
	I0415 18:39:10.524211    7520 config.go:182] Loaded profile config "multinode-457000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 18:39:10.524211    7520 status.go:255] checking status of multinode-457000 ...
	I0415 18:39:10.542209    7520 cli_runner.go:164] Run: docker container inspect multinode-457000 --format={{.State.Status}}
	I0415 18:39:10.707312    7520 status.go:330] multinode-457000 host status = "Stopped" (err=<nil>)
	I0415 18:39:10.707312    7520 status.go:343] host is not running, skipping remaining checks
	I0415 18:39:10.707312    7520 status.go:257] multinode-457000 status: &{Name:multinode-457000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0415 18:39:10.707312    7520 status.go:255] checking status of multinode-457000-m02 ...
	I0415 18:39:10.724340    7520 cli_runner.go:164] Run: docker container inspect multinode-457000-m02 --format={{.State.Status}}
	I0415 18:39:10.891849    7520 status.go:330] multinode-457000-m02 host status = "Stopped" (err=<nil>)
	I0415 18:39:10.891849    7520 status.go:343] host is not running, skipping remaining checks
	I0415 18:39:10.891849    7520 status.go:257] multinode-457000-m02 status: &{Name:multinode-457000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.41s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (70.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-457000 --wait=true -v=8 --alsologtostderr --driver=docker
multinode_test.go:376: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-457000 --wait=true -v=8 --alsologtostderr --driver=docker: (1m8.4387147s)
multinode_test.go:382: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-457000 status --alsologtostderr
multinode_test.go:382: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-457000 status --alsologtostderr: (1.9784118s)
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (70.83s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (74.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-457000
multinode_test.go:464: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-457000-m02 --driver=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-457000-m02 --driver=docker: exit status 14 (273.774ms)

                                                
                                                
-- stdout --
	* [multinode-457000-m02] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18634
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 18:40:22.088079   14576 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	! Profile name 'multinode-457000-m02' is duplicated with machine name 'multinode-457000-m02' in profile 'multinode-457000'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-457000-m03 --driver=docker
E0415 18:40:37.037390   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-662500\client.crt: The system cannot find the path specified.
multinode_test.go:472: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-457000-m03 --driver=docker: (1m7.3061166s)
multinode_test.go:479: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-457000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p multinode-457000: exit status 80 (1.16806s)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-457000 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 18:41:29.681674    8300 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-457000-m03 already exists in multinode-457000-m03 profile
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube_node_6ccce2fc44e3bb58d6c4f91e09ae7c7eaaf65535_22.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-windows-amd64.exe delete -p multinode-457000-m03
multinode_test.go:484: (dbg) Done: out/minikube-windows-amd64.exe delete -p multinode-457000-m03: (5.6574433s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (74.64s)

                                                
                                    
x
+
TestPreload (161.24s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-383700 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.24.4
E0415 18:42:57.266284   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-661400\client.crt: The system cannot find the path specified.
E0415 18:43:14.094816   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-661400\client.crt: The system cannot find the path specified.
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-383700 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.24.4: (1m46.9012823s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-383700 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-383700 image pull gcr.io/k8s-minikube/busybox: (2.0603159s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-383700
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-383700: (12.4411873s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-383700 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker
preload_test.go:66: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-383700 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker: (33.0332123s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-383700 image list
preload_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-383700 image list: (1.3586686s)
helpers_test.go:175: Cleaning up "test-preload-383700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-383700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-383700: (5.4470533s)
--- PASS: TestPreload (161.24s)

                                                
                                    
TestScheduledStopWindows (138.14s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-553700 --memory=2048 --driver=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-553700 --memory=2048 --driver=docker: (1m7.657103s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-553700 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-553700 --schedule 5m: (1.3631646s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-553700 -n scheduled-stop-553700
E0415 18:45:37.062040   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-662500\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:191: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-553700 -n scheduled-stop-553700: (1.3049965s)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-553700 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-553700 -- sudo systemctl show minikube-scheduled-stop --no-page: (1.1370512s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-553700 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-553700 --schedule 5s: (1.3909678s)
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-553700
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-553700: exit status 7 (432.2191ms)

                                                
                                                
-- stdout --
	scheduled-stop-553700
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 18:46:40.314857    9384 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-553700 -n scheduled-stop-553700
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-553700 -n scheduled-stop-553700: exit status 7 (462.3837ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 18:46:40.750502   10188 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-553700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-553700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-553700: (4.3785908s)
--- PASS: TestScheduledStopWindows (138.14s)

                                                
                                    
TestInsufficientStorage (46.74s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe start -p insufficient-storage-218800 --memory=2048 --output=json --wait=true --driver=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p insufficient-storage-218800 --memory=2048 --output=json --wait=true --driver=docker: exit status 26 (40.0359351s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"9dee502e-253e-4620-a128-a5098b15cddc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-218800] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"55f155a9-722d-4534-94e1-835f07456b2f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube4\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"0f708541-621e-40b0-b8fd-172c264fdb59","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d5d4189a-31b4-4c49-94e1-4dfb0494f330","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"7d571474-778e-4530-b3bf-7761ad7c8ac6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18634"}}
	{"specversion":"1.0","id":"d63379e3-db8a-4bba-8b2a-9d93d0b72a56","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0f6ba8d8-2d46-471b-8318-e937ad125993","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"1a26bce1-1267-4179-bf99-19ed164f6c67","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"079ea447-fc3a-46e4-abf9-b888786a5f88","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"4af35ccb-62f1-429f-ab10-c0f2526f0485","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"d02482e2-1f03-4cdf-879a-a689b3bafe37","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-218800\" primary control-plane node in \"insufficient-storage-218800\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"b923cfc4-c126-462e-9b56-108fffee3845","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.43-1713176859-18634 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"b3052cb9-9b2d-4c8b-94e3-ea5c97efdf20","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"56c014ed-6469-4a50-b289-44225ded7a0e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 18:46:45.597745    8200 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-218800 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-218800 --output=json --layout=cluster: exit status 7 (1.2057332s)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-218800","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.0-beta.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-218800","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 18:47:25.634740    3004 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0415 18:47:26.661290    3004 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-218800" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-218800 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-218800 --output=json --layout=cluster: exit status 7 (1.1754678s)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-218800","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.0-beta.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-218800","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 18:47:26.843014    9432 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0415 18:47:27.839659    9432 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-218800" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	E0415 18:47:27.875151    9432 status.go:560] unable to read event log: stat: CreateFile C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\insufficient-storage-218800\events.json: The system cannot find the file specified.

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-218800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p insufficient-storage-218800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p insufficient-storage-218800: (4.324297s)
--- PASS: TestInsufficientStorage (46.74s)

                                                
                                    
TestRunningBinaryUpgrade (244.71s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.26.0.1979417143.exe start -p running-upgrade-465600 --memory=2200 --vm-driver=docker
E0415 18:50:37.066813   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-662500\client.crt: The system cannot find the path specified.
version_upgrade_test.go:120: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.26.0.1979417143.exe start -p running-upgrade-465600 --memory=2200 --vm-driver=docker: (1m28.6160357s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-465600 --memory=2200 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-465600 --memory=2200 --alsologtostderr -v=1 --driver=docker: (2m26.2021852s)
helpers_test.go:175: Cleaning up "running-upgrade-465600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-465600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-465600: (8.8116511s)
--- PASS: TestRunningBinaryUpgrade (244.71s)

                                                
                                    
TestKubernetesUpgrade (500.74s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-023700 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-023700 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker: (1m49.4766184s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-023700
version_upgrade_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-023700: (13.3025076s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-023700 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-023700 status --format={{.Host}}: exit status 7 (469.8106ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 18:57:02.272847    5756 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-023700 --memory=2200 --kubernetes-version=v1.30.0-rc.2 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-023700 --memory=2200 --kubernetes-version=v1.30.0-rc.2 --alsologtostderr -v=1 --driver=docker: (5m34.7998898s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-023700 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-023700 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-023700 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker: exit status 106 (296.995ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-023700] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18634
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 19:02:37.741085    3944 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.0-rc.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-023700
	    minikube start -p kubernetes-upgrade-023700 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0237002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-023700 --kubernetes-version=v1.30.0-rc.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-023700 --memory=2200 --kubernetes-version=v1.30.0-rc.2 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-023700 --memory=2200 --kubernetes-version=v1.30.0-rc.2 --alsologtostderr -v=1 --driver=docker: (35.0593402s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-023700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-023700
E0415 19:03:14.162011   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-661400\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-023700: (7.1560258s)
--- PASS: TestKubernetesUpgrade (500.74s)

                                                
                                    
TestMissingContainerUpgrade (392.44s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.26.0.4181286834.exe start -p missing-upgrade-383200 --memory=2200 --driver=docker
version_upgrade_test.go:309: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.26.0.4181286834.exe start -p missing-upgrade-383200 --memory=2200 --driver=docker: (3m41.5786311s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-383200
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-383200: (11.8363053s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-383200
version_upgrade_test.go:329: (dbg) Run:  out/minikube-windows-amd64.exe start -p missing-upgrade-383200 --memory=2200 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-windows-amd64.exe start -p missing-upgrade-383200 --memory=2200 --alsologtostderr -v=1 --driver=docker: (2m30.9189914s)
helpers_test.go:175: Cleaning up "missing-upgrade-383200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p missing-upgrade-383200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p missing-upgrade-383200: (6.3116751s)
--- PASS: TestMissingContainerUpgrade (392.44s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.81s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.81s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (329.45s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.26.0.284524178.exe start -p stopped-upgrade-383200 --memory=2200 --vm-driver=docker
version_upgrade_test.go:183: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.26.0.284524178.exe start -p stopped-upgrade-383200 --memory=2200 --vm-driver=docker: (3m42.7270974s)
version_upgrade_test.go:192: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.26.0.284524178.exe -p stopped-upgrade-383200 stop
version_upgrade_test.go:192: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.26.0.284524178.exe -p stopped-upgrade-383200 stop: (13.7888167s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-383200 --memory=2200 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-383200 --memory=2200 --alsologtostderr -v=1 --driver=docker: (1m32.9347242s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (329.45s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.53s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-344600 --no-kubernetes --kubernetes-version=1.20 --driver=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-344600 --no-kubernetes --kubernetes-version=1.20 --driver=docker: exit status 14 (525.9946ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-344600] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18634
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 18:47:50.482357    8420 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.53s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (188.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-344600 --driver=docker
E0415 18:48:14.114542   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-661400\client.crt: The system cannot find the path specified.
no_kubernetes_test.go:95: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-344600 --driver=docker: (3m6.3640965s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-344600 status -o json
no_kubernetes_test.go:200: (dbg) Done: out/minikube-windows-amd64.exe -p NoKubernetes-344600 status -o json: (1.7256341s)
--- PASS: TestNoKubernetes/serial/StartWithK8s (188.09s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (29.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-344600 --no-kubernetes --driver=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-344600 --no-kubernetes --driver=docker: (22.8932603s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-344600 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p NoKubernetes-344600 status -o json: exit status 2 (1.2030966s)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-344600","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 18:51:21.955747    4068 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-windows-amd64.exe delete -p NoKubernetes-344600
no_kubernetes_test.go:124: (dbg) Done: out/minikube-windows-amd64.exe delete -p NoKubernetes-344600: (5.2806705s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (29.38s)

                                                
                                    
TestNoKubernetes/serial/Start (23.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-344600 --no-kubernetes --driver=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-344600 --no-kubernetes --driver=docker: (23.119177s)
--- PASS: TestNoKubernetes/serial/Start (23.12s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (1.36s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p NoKubernetes-344600 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p NoKubernetes-344600 "sudo systemctl is-active --quiet service kubelet": exit status 1 (1.364149s)

                                                
                                                
** stderr ** 
	W0415 18:51:51.604970   12612 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (1.36s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (8.43s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-windows-amd64.exe profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-windows-amd64.exe profile list: (4.1670743s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe profile list --output=json: (4.2651928s)
--- PASS: TestNoKubernetes/serial/ProfileList (8.43s)

                                                
                                    
TestNoKubernetes/serial/Stop (16.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-windows-amd64.exe stop -p NoKubernetes-344600
no_kubernetes_test.go:158: (dbg) Done: out/minikube-windows-amd64.exe stop -p NoKubernetes-344600: (16.3857055s)
--- PASS: TestNoKubernetes/serial/Stop (16.39s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (18.05s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-344600 --driver=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-344600 --driver=docker: (18.0454966s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (18.05s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (1.43s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p NoKubernetes-344600 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p NoKubernetes-344600 "sudo systemctl is-active --quiet service kubelet": exit status 1 (1.4269101s)

                                                
                                                
** stderr ** 
	W0415 18:52:35.826079    1332 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (1.43s)

                                                
                                    
TestPause/serial/Start (139.89s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-176700 --memory=2048 --install-addons=false --wait=all --driver=docker
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-176700 --memory=2048 --install-addons=false --wait=all --driver=docker: (2m19.8847373s)
--- PASS: TestPause/serial/Start (139.89s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (6.02s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-383200
version_upgrade_test.go:206: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-383200: (6.0155159s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (6.02s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (46.41s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-176700 --alsologtostderr -v=1 --driver=docker
E0415 18:55:20.300337   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-662500\client.crt: The system cannot find the path specified.
pause_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-176700 --alsologtostderr -v=1 --driver=docker: (46.3864196s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (46.41s)

                                                
                                    
TestPause/serial/Pause (6.53s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-176700 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-176700 --alsologtostderr -v=5: (6.5317144s)
--- PASS: TestPause/serial/Pause (6.53s)

                                                
                                    
TestPause/serial/VerifyStatus (1.39s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p pause-176700 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p pause-176700 --output=json --layout=cluster: exit status 2 (1.3897549s)

                                                
                                                
-- stdout --
	{"Name":"pause-176700","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.0-beta.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-176700","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 18:55:55.482777    4456 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
--- PASS: TestPause/serial/VerifyStatus (1.39s)

                                                
                                    
TestPause/serial/Unpause (4.14s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p pause-176700 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe unpause -p pause-176700 --alsologtostderr -v=5: (4.1359336s)
--- PASS: TestPause/serial/Unpause (4.14s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (95.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p auto-008800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p auto-008800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker: (1m35.4398782s)
--- PASS: TestNetworkPlugins/group/auto/Start (95.44s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (178.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p calico-008800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker
E0415 18:58:14.138435   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-661400\client.crt: The system cannot find the path specified.
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p calico-008800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker: (2m58.5372702s)
--- PASS: TestNetworkPlugins/group/calico/Start (178.54s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (1.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p auto-008800 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p auto-008800 "pgrep -a kubelet": (1.2551685s)
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (1.26s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (17.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-008800 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-zn96s" [4a73cb67-59f0-4e0e-a357-4df6f7da9496] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-zn96s" [4a73cb67-59f0-4e0e-a357-4df6f7da9496] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 17.0214657s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (17.63s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-008800 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.35s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-008800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.32s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-008800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.32s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (104.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-flannel-008800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-flannel-008800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=docker: (1m44.2092802s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (104.21s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-5qls4" [b728d3f3-845d-4664-a945-9408f4284bd3] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.0232448s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.03s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (1.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p calico-008800 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p calico-008800 "pgrep -a kubelet": (1.3967099s)
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (1.40s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (20.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-008800 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-f8ts6" [13520ff2-50aa-4790-85c9-f410693c3220] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0415 19:00:37.102890   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-662500\client.crt: The system cannot find the path specified.
helpers_test.go:344: "netcat-56589dfd74-f8ts6" [13520ff2-50aa-4790-85c9-f410693c3220] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 20.0206882s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (20.72s)

                                                
                                    
TestNetworkPlugins/group/false/Start (89.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p false-008800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p false-008800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker: (1m29.6555478s)
--- PASS: TestNetworkPlugins/group/false/Start (89.66s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-008800 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.38s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-008800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.36s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-008800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.33s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (1.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p custom-flannel-008800 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p custom-flannel-008800 "pgrep -a kubelet": (1.2069368s)
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (1.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (19.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-008800 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-6vtqc" [43f38960-c4db-46d3-accd-0ecfa71c275f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-6vtqc" [43f38960-c4db-46d3-accd-0ecfa71c275f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 19.0195223s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (19.62s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-008800 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.46s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-008800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.36s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-008800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.34s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (125.84s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p kindnet-008800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p kindnet-008800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker: (2m5.8400893s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (125.84s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (1.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p false-008800 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p false-008800 "pgrep -a kubelet": (1.1980582s)
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (1.20s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (25.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-008800 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-r67mb" [81b32ea4-d2a1-49fc-94aa-3245bdefd9d4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-r67mb" [81b32ea4-d2a1-49fc-94aa-3245bdefd9d4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 25.0102835s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (25.61s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-008800 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.44s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-008800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.34s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-008800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.35s)
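
The DNS/Localhost/HairPin triplet above (repeated identically for the other CNI profiles below) is a set of in-pod connectivity probes: an nslookup of kubernetes.default, a netcat connect to localhost:8080, and a netcat connect back to the pod's own service name (the hairpin case). A sketch issuing all three against one profile, for illustration only; the profile name is taken from the log.

// Illustrative sketch: the three in-pod connectivity probes used by the
// DNS, Localhost and HairPin subtests, run via kubectl exec. Not test code.
package main

import (
	"fmt"
	"os/exec"
)

func probe(ctx, shellCmd string) {
	out, err := exec.Command("kubectl", "--context", ctx,
		"exec", "deployment/netcat", "--", "/bin/sh", "-c", shellCmd).CombinedOutput()
	fmt.Printf("$ %s\n%s(err=%v)\n", shellCmd, out, err)
}

func main() {
	ctx := "false-008800"
	probe(ctx, "nslookup kubernetes.default")    // DNS
	probe(ctx, "nc -w 5 -i 5 -z localhost 8080") // Localhost
	probe(ctx, "nc -w 5 -i 5 -z netcat 8080")    // HairPin: dial the pod's own service name
}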

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (125.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p flannel-008800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p flannel-008800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker: (2m5.62384s)
--- PASS: TestNetworkPlugins/group/flannel/Start (125.62s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (100.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p enable-default-cni-008800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker
E0415 19:03:30.418181   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-008800\client.crt: The system cannot find the path specified.
E0415 19:03:30.433373   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-008800\client.crt: The system cannot find the path specified.
E0415 19:03:30.449239   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-008800\client.crt: The system cannot find the path specified.
E0415 19:03:30.480497   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-008800\client.crt: The system cannot find the path specified.
E0415 19:03:30.526944   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-008800\client.crt: The system cannot find the path specified.
E0415 19:03:30.618547   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-008800\client.crt: The system cannot find the path specified.
E0415 19:03:30.793078   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-008800\client.crt: The system cannot find the path specified.
E0415 19:03:31.124002   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-008800\client.crt: The system cannot find the path specified.
E0415 19:03:31.772829   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-008800\client.crt: The system cannot find the path specified.
E0415 19:03:33.058939   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-008800\client.crt: The system cannot find the path specified.
E0415 19:03:35.626506   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-008800\client.crt: The system cannot find the path specified.
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p enable-default-cni-008800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker: (1m40.7876639s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (100.79s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (114.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p bridge-008800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p bridge-008800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker: (1m54.0920754s)
--- PASS: TestNetworkPlugins/group/bridge/Start (114.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-tkzpr" [b0c111fc-89c2-416a-9ebc-266d10de6009] Running
E0415 19:04:11.993349   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-008800\client.crt: The system cannot find the path specified.
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.0228274s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.03s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (1.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kindnet-008800 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p kindnet-008800 "pgrep -a kubelet": (1.5006983s)
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (1.51s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (18.67s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-008800 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-g68cr" [432c81a8-751a-4699-b12c-0b9b7072dc65] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-g68cr" [432c81a8-751a-4699-b12c-0b9b7072dc65] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 18.0142213s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (18.67s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-008800 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-008800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.45s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-008800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (1.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p enable-default-cni-008800 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p enable-default-cni-008800 "pgrep -a kubelet": (1.3156789s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (1.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (18.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-008800 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-42gwn" [a9e02759-202c-47f2-8163-399954ff890d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-42gwn" [a9e02759-202c-47f2-8163-399954ff890d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 18.029833s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (18.69s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-k28qt" [6bacb4c8-103c-4b21-a0fc-b0a5bd387a8b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.0113491s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.02s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (1.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p flannel-008800 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p flannel-008800 "pgrep -a kubelet": (1.3341418s)
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (1.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (17.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-008800 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-r9q7q" [e1c38a65-f891-4c12-86cd-e93c6ca37339] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-r9q7q" [e1c38a65-f891-4c12-86cd-e93c6ca37339] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 16.0204635s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (17.54s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-008800 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-008800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-008800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-008800 exec deployment/netcat -- nslookup kubernetes.default
E0415 19:05:37.106239   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-662500\client.crt: The system cannot find the path specified.
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.43s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-008800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-008800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/Start (103.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubenet-008800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p kubenet-008800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker: (1m43.1376251s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (103.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (1.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p bridge-008800 "pgrep -a kubelet"
E0415 19:05:47.229657   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-008800\client.crt: The system cannot find the path specified.
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p bridge-008800 "pgrep -a kubelet": (1.7849232s)
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (1.79s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (25.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-008800 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-mz7dt" [f65749da-506d-495f-a5d1-423cbb41830e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0415 19:06:07.715519   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-008800\client.crt: The system cannot find the path specified.
helpers_test.go:344: "netcat-56589dfd74-mz7dt" [f65749da-506d-495f-a5d1-423cbb41830e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 25.016786s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (25.72s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-008800 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-008800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-008800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0415 19:06:14.890314   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-008800\client.crt: The system cannot find the path specified.
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.51s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (210.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-075400 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.20.0
E0415 19:06:38.740996   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-008800\client.crt: The system cannot find the path specified.
E0415 19:06:38.756157   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-008800\client.crt: The system cannot find the path specified.
E0415 19:06:38.771744   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-008800\client.crt: The system cannot find the path specified.
E0415 19:06:38.803756   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-008800\client.crt: The system cannot find the path specified.
E0415 19:06:38.850762   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-008800\client.crt: The system cannot find the path specified.
E0415 19:06:38.944382   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-008800\client.crt: The system cannot find the path specified.
E0415 19:06:39.117798   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-008800\client.crt: The system cannot find the path specified.
E0415 19:06:39.446195   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-008800\client.crt: The system cannot find the path specified.
E0415 19:06:40.091804   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-008800\client.crt: The system cannot find the path specified.
E0415 19:06:41.385538   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-008800\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-075400 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.20.0: (3m30.2521426s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (210.25s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (136.41s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-523900 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.30.0-rc.2
E0415 19:06:59.325523   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-008800\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-523900 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.30.0-rc.2: (2m16.4136924s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (136.41s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (109.61s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-362000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.29.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-362000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.29.3: (1m49.6065394s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (109.61s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/KubeletFlags (1.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kubenet-008800 "pgrep -a kubelet"
E0415 19:07:26.975544   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-008800\client.crt: The system cannot find the path specified.
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p kubenet-008800 "pgrep -a kubelet": (1.7516372s)
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (1.75s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/NetCatPod (39.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-008800 replace --force -f testdata\netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context kubenet-008800 replace --force -f testdata\netcat-deployment.yaml: (2.3226334s)
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-6md5b" [4abf4a65-1b37-469e-a4bd-acc274fa37b1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0415 19:07:37.217135   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-008800\client.crt: The system cannot find the path specified.
E0415 19:07:57.713339   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-008800\client.crt: The system cannot find the path specified.
E0415 19:08:00.780203   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-008800\client.crt: The system cannot find the path specified.
helpers_test.go:344: "netcat-56589dfd74-6md5b" [4abf4a65-1b37-469e-a4bd-acc274fa37b1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 37.0236564s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (39.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/DNS (0.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-008800 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/Localhost (0.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-008800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/HairPin (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-008800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.39s)
E0415 19:15:02.904607   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-008800\client.crt: The system cannot find the path specified.
E0415 19:15:12.139529   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-008800\client.crt: The system cannot find the path specified.
E0415 19:15:14.766129   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-008800\client.crt: The system cannot find the path specified.
E0415 19:15:25.443532   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-008800\client.crt: The system cannot find the path specified.
E0415 19:15:30.702256   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-008800\client.crt: The system cannot find the path specified.
E0415 19:15:37.132625   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-662500\client.crt: The system cannot find the path specified.
E0415 19:15:39.926309   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-008800\client.crt: The system cannot find the path specified.
E0415 19:15:48.916219   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-008800\client.crt: The system cannot find the path specified.
E0415 19:16:16.739115   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-008800\client.crt: The system cannot find the path specified.
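
The interleaved E0415 cert_rotation.go:168 lines here and throughout the report appear to come from client-go's certificate-reload loop still pointing at client.crt files of profiles that have since been deleted (auto, calico, false, flannel, enable-default-cni, bridge, kubenet); they are log noise rather than test failures. A trivial sketch that only checks whether such a key file is still on disk; the path layout is assumed from the logged messages.

// Illustrative sketch: check whether a profile's client.crt referenced by the
// cert_rotation errors still exists. Path layout assumed from the log above.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	home, _ := os.UserHomeDir()
	crt := filepath.Join(home, "minikube-integration", ".minikube",
		"profiles", "enable-default-cni-008800", "client.crt")
	if _, err := os.Stat(crt); err != nil {
		fmt.Println("client.crt missing (expected once the profile is deleted):", err)
		return
	}
	fmt.Println("client.crt still present:", crt)
}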

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (11.78s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-523900 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [54ad9bc9-6971-41a5-a1b9-86fe5a664955] Pending
helpers_test.go:344: "busybox" [54ad9bc9-6971-41a5-a1b9-86fe5a664955] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [54ad9bc9-6971-41a5-a1b9-86fe5a664955] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.0164844s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-523900 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.78s)
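
DeployApp applies the suite's testdata\busybox.yaml, waits for the busybox pod to run, and then reads `ulimit -n` inside it. The sequence is sketched below; the wait is again approximated with `kubectl wait` rather than the suite's own polling helper, and this is not the actual start_stop_delete_test.go code.

// Illustrative sketch: deploy the busybox test pod, wait for it, and read its
// open-file limit, mirroring the DeployApp subtest. Not the actual test code.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	ctx := "no-preload-523900"
	steps := [][]string{
		{"--context", ctx, "create", "-f", `testdata\busybox.yaml`},
		{"--context", ctx, "wait", "--for=condition=Ready", "pod",
			"-l", "integration-test=busybox", "--timeout=8m"},
		{"--context", ctx, "exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n"},
	}
	for _, args := range steps {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err != nil {
			log.Fatalf("kubectl %v: %v\n%s", args, err, out)
		}
		fmt.Printf("%s", out)
	}
}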

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (11.77s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-362000 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [93c1705c-ef68-4786-b09d-6b03bbda3d9b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0415 19:09:14.005441   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-008800\client.crt: The system cannot find the path specified.
helpers_test.go:344: "busybox" [93c1705c-ef68-4786-b09d-6b03bbda3d9b] Running
E0415 19:09:19.127448   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-008800\client.crt: The system cannot find the path specified.
E0415 19:09:22.718317   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-008800\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.0116693s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-362000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.77s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (119.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-diff-port-923600 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.29.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-diff-port-923600 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.29.3: (1m59.24363s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (119.24s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (3.45s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-523900 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-523900 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.4669916s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-523900 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (3.45s)
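
EnableAddonWhileActive turns on the metrics-server addon while the cluster is running, pointing it at a stand-in image and registry so nothing real is pulled, and then describes the resulting deployment. A sketch of the two commands, with flags and values copied from the log; this is illustrative only.

// Illustrative sketch: enable the metrics-server addon with overridden image
// and registry, then inspect the deployment, as EnableAddonWhileActive does.
package main

import (
	"log"
	"os"
	"os/exec"
)

func run(name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("%s: %v", name, err)
	}
}

func main() {
	run("out/minikube-windows-amd64.exe", "addons", "enable", "metrics-server",
		"-p", "no-preload-523900",
		"--images=MetricsServer=registry.k8s.io/echoserver:1.4",
		"--registries=MetricsServer=fake.domain")
	run("kubectl", "--context", "no-preload-523900",
		"describe", "deploy/metrics-server", "-n", "kube-system")
}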

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (4.51s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-362000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-362000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (4.1826359s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-362000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (4.51s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (14.65s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p no-preload-523900 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p no-preload-523900 --alsologtostderr -v=3: (14.652667s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (14.65s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (13.44s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p embed-certs-362000 --alsologtostderr -v=3
E0415 19:09:29.382781   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-008800\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p embed-certs-362000 --alsologtostderr -v=3: (13.4425312s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.44s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (1.14s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-523900 -n no-preload-523900
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-523900 -n no-preload-523900: exit status 7 (414.0837ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 19:09:41.674957   13780 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-523900 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (1.14s)
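
EnableAddonAfterStop first confirms the profile is down via `minikube status --format={{.Host}}`; minikube encodes cluster state in its exit code, so the exit status 7 seen above is expected for a stopped profile rather than a failure ("may be ok"), and only then is the dashboard addon enabled. A sketch that treats exit code 7 as "stopped" follows; that interpretation is taken from the test's own handling above, not independently verified against minikube documentation.

// Illustrative sketch: check the host state of a stopped profile and then
// enable the dashboard addon, mirroring EnableAddonAfterStop. Not test code.
package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	profile := "no-preload-523900"
	status := exec.Command("out/minikube-windows-amd64.exe", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile)
	out, err := status.CombinedOutput()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("host: %s", out)
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
		// The test logs "status error: exit status 7 (may be ok)" for this case.
		fmt.Printf("host: %s(exit code 7, expected after a stop)\n", out)
	default:
		log.Fatalf("unexpected status failure: %v\n%s", err, out)
	}

	enable := exec.Command("out/minikube-windows-amd64.exe", "addons", "enable",
		"dashboard", "-p", profile,
		"--images=MetricsScraper=registry.k8s.io/echoserver:1.4")
	if out, err := enable.CombinedOutput(); err != nil {
		log.Fatalf("addons enable failed: %v\n%s", err, out)
	}
}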

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (1.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-362000 -n embed-certs-362000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-362000 -n embed-certs-362000: exit status 7 (463.2975ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 19:09:42.544458    6744 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-362000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (1.28s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (283.59s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-523900 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.30.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-523900 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.30.0-rc.2: (4m42.2253804s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-523900 -n no-preload-523900
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-523900 -n no-preload-523900: (1.3678035s)
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (283.59s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (284.76s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-362000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.29.3
E0415 19:09:49.876773   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-008800\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-362000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.29.3: (4m43.4456429s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-362000 -n embed-certs-362000
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-362000 -n embed-certs-362000: (1.309482s)
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (284.76s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (11.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-075400 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5c19bd2a-10b4-40d5-942a-f7bd207dde45] Pending
E0415 19:10:00.607745   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-008800\client.crt: The system cannot find the path specified.
helpers_test.go:344: "busybox" [5c19bd2a-10b4-40d5-942a-f7bd207dde45] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0415 19:10:02.878820   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-008800\client.crt: The system cannot find the path specified.
E0415 19:10:02.894816   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-008800\client.crt: The system cannot find the path specified.
E0415 19:10:02.910832   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-008800\client.crt: The system cannot find the path specified.
E0415 19:10:02.943021   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-008800\client.crt: The system cannot find the path specified.
E0415 19:10:02.990822   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-008800\client.crt: The system cannot find the path specified.
E0415 19:10:03.085827   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-008800\client.crt: The system cannot find the path specified.
E0415 19:10:03.258163   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-008800\client.crt: The system cannot find the path specified.
E0415 19:10:03.586007   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-008800\client.crt: The system cannot find the path specified.
E0415 19:10:04.239002   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-008800\client.crt: The system cannot find the path specified.
helpers_test.go:344: "busybox" [5c19bd2a-10b4-40d5-942a-f7bd207dde45] Running
E0415 19:10:05.524327   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-008800\client.crt: The system cannot find the path specified.
E0415 19:10:08.095020   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-008800\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.016506s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-075400 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.20s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (3.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-075400 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0415 19:10:12.113797   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-008800\client.crt: The system cannot find the path specified.
E0415 19:10:12.130002   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-008800\client.crt: The system cannot find the path specified.
E0415 19:10:12.145924   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-008800\client.crt: The system cannot find the path specified.
E0415 19:10:12.177433   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-008800\client.crt: The system cannot find the path specified.
E0415 19:10:12.224408   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-008800\client.crt: The system cannot find the path specified.
E0415 19:10:12.316185   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-008800\client.crt: The system cannot find the path specified.
E0415 19:10:12.485488   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-008800\client.crt: The system cannot find the path specified.
E0415 19:10:12.816195   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-008800\client.crt: The system cannot find the path specified.
E0415 19:10:13.223888   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-008800\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-075400 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.6595968s)
E0415 19:10:13.458996   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-008800\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-075400 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (3.07s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (12.88s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p old-k8s-version-075400 --alsologtostderr -v=3
E0415 19:10:14.748660   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-008800\client.crt: The system cannot find the path specified.
E0415 19:10:17.316525   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-008800\client.crt: The system cannot find the path specified.
E0415 19:10:22.443253   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-008800\client.crt: The system cannot find the path specified.
E0415 19:10:23.464699   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-008800\client.crt: The system cannot find the path specified.
E0415 19:10:25.426243   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-008800\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p old-k8s-version-075400 --alsologtostderr -v=3: (12.8754257s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.88s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (1.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-075400 -n old-k8s-version-075400
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-075400 -n old-k8s-version-075400: exit status 7 (450.0234ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 19:10:26.857849    9180 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-075400 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (1.24s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.81s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-923600 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5729b08d-4d4f-4993-b572-bdbd2414a5b0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5729b08d-4d4f-4993-b572-bdbd2414a5b0] Running
E0415 19:11:24.913713   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-008800\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.0167382s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-923600 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.81s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.83s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-diff-port-923600 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0415 19:11:29.998547   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-008800\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-diff-port-923600 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.503985s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-923600 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.83s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.92s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p default-k8s-diff-port-923600 --alsologtostderr -v=3
E0415 19:11:34.140054   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-008800\client.crt: The system cannot find the path specified.
E0415 19:11:38.763769   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-008800\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p default-k8s-diff-port-923600 --alsologtostderr -v=3: (12.9201064s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.92s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (1.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-923600 -n default-k8s-diff-port-923600
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-923600 -n default-k8s-diff-port-923600: exit status 7 (510.7542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 19:11:44.077345    3940 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-diff-port-923600 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (1.21s)
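"status error: exit status 7 (may be ok)" is expected immediately after a stop: as far as I recall from `minikube status --help`, the exit code is a bitmask in which the host, the cluster/apiserver and the Kubernetes checks each contribute one bit, so 7 (1+2+4) simply means everything reports stopped. A small sketch decoding a status exit code under that assumption (the bit labels are from memory and not verified against this minikube build):

package main

import "fmt"

// decodeStatusExit interprets a minikube `status` exit code as a bitmask.
// Assumed bit layout (from memory of `minikube status --help`):
//   1 - host (minikube) not OK
//   2 - cluster (apiserver) not OK
//   4 - kubernetes components not OK
func decodeStatusExit(code int) []string {
	var reasons []string
	if code&1 != 0 {
		reasons = append(reasons, "host not running")
	}
	if code&2 != 0 {
		reasons = append(reasons, "cluster (apiserver) not running")
	}
	if code&4 != 0 {
		reasons = append(reasons, "kubernetes components not running")
	}
	if len(reasons) == 0 {
		reasons = append(reasons, "everything reported OK")
	}
	return reasons
}

func main() {
	fmt.Println(decodeStatusExit(7)) // the code returned by the stopped profile above
	fmt.Println(decodeStatusExit(2)) // the code seen later in the Pause subtests
}

With that reading, the exit status 2 seen in the Pause subtests further down would correspond to only the cluster bit being set, which matches the Paused/Stopped output captured there.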

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (283.71s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-diff-port-923600 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.29.3
E0415 19:11:52.771187   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-008800\client.crt: The system cannot find the path specified.
E0415 19:12:00.356180   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-662500\client.crt: The system cannot find the path specified.
E0415 19:12:06.580758   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-008800\client.crt: The system cannot find the path specified.
E0415 19:12:10.961401   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-008800\client.crt: The system cannot find the path specified.
E0415 19:12:16.642332   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-008800\client.crt: The system cannot find the path specified.
E0415 19:12:30.809245   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-008800\client.crt: The system cannot find the path specified.
E0415 19:12:30.823939   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-008800\client.crt: The system cannot find the path specified.
E0415 19:12:30.839260   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-008800\client.crt: The system cannot find the path specified.
E0415 19:12:30.870099   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-008800\client.crt: The system cannot find the path specified.
E0415 19:12:30.917355   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-008800\client.crt: The system cannot find the path specified.
E0415 19:12:31.008646   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-008800\client.crt: The system cannot find the path specified.
E0415 19:12:31.182505   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-008800\client.crt: The system cannot find the path specified.
E0415 19:12:31.509363   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-008800\client.crt: The system cannot find the path specified.
E0415 19:12:32.150837   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-008800\client.crt: The system cannot find the path specified.
E0415 19:12:33.445984   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-008800\client.crt: The system cannot find the path specified.
E0415 19:12:36.008052   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-008800\client.crt: The system cannot find the path specified.
E0415 19:12:41.142496   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-008800\client.crt: The system cannot find the path specified.
E0415 19:12:44.464518   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-008800\client.crt: The system cannot find the path specified.
E0415 19:12:46.849179   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-008800\client.crt: The system cannot find the path specified.
E0415 19:12:51.385546   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-008800\client.crt: The system cannot find the path specified.
E0415 19:12:56.069922   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-008800\client.crt: The system cannot find the path specified.
E0415 19:13:11.867854   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-008800\client.crt: The system cannot find the path specified.
E0415 19:13:14.187463   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-661400\client.crt: The system cannot find the path specified.
E0415 19:13:30.442627   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-008800\client.crt: The system cannot find the path specified.
E0415 19:13:32.887813   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-008800\client.crt: The system cannot find the path specified.
E0415 19:13:52.833862   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-008800\client.crt: The system cannot find the path specified.
E0415 19:14:08.719754   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-008800\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-diff-port-923600 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.29.3: (4m42.4102273s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-923600 -n default-k8s-diff-port-923600
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-923600 -n default-k8s-diff-port-923600: (1.3027919s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (283.71s)
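The E0415 ... cert_rotation.go:168 lines interleaved with this start come from the long-running test process (pid 11748): its client-go certificate-rotation watcher apparently still references client.crt files of profiles such as kubenet-008800 and kindnet-008800 that earlier tests already tore down, so every reload attempt fails with a missing-path error. They are background noise, not part of the default-k8s-diff-port result. A stdlib sketch that checks which of those referenced certificates are actually gone (profile names copied from the errors above; the path layout is read off the error messages):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Layout taken from the error messages above:
	//   <MINIKUBE_HOME>/profiles/<profile>/client.crt
	home := os.Getenv("MINIKUBE_HOME")
	if home == "" {
		fmt.Fprintln(os.Stderr, "MINIKUBE_HOME is not set")
		os.Exit(1)
	}

	// Profile names copied from the cert_rotation errors in this section.
	profiles := []string{
		"kindnet-008800", "bridge-008800", "flannel-008800",
		"custom-flannel-008800", "false-008800", "kubenet-008800",
	}
	for _, p := range profiles {
		crt := filepath.Join(home, "profiles", p, "client.crt")
		if _, err := os.Stat(crt); err != nil {
			fmt.Printf("%s: %v (profile deleted earlier in the run)\n", crt, err)
		} else {
			fmt.Printf("%s: present\n", crt)
		}
	}
}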

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-rwsjr" [8358b059-5bf4-4987-89a6-00666de06303] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0202303s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-7glb2" [1b5c5db2-cd58-4a57-a387-9eca6eb0bd6c] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0174245s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.02s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.36s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-rwsjr" [8358b059-5bf4-4987-89a6-00666de06303] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0172252s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-523900 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
E0415 19:14:36.633680   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-008800\client.crt: The system cannot find the path specified.
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.36s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-7glb2" [1b5c5db2-cd58-4a57-a387-9eca6eb0bd6c] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0239832s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-362000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.38s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.85s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-523900 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.85s)
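VerifyKubernetesImages lists the images loaded in the node and flags anything outside the expected Kubernetes set; the busybox image reported above is the pod deployed by the earlier DeployApp step, so the finding is expected. A hedged sketch of a similar audit using the plain `minikube image list` output (one image reference per line is an assumption about the default format, and the allow-list below is illustrative rather than the harness's real list):

package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profile := "no-preload-523900" // profile name from the log above

	out, err := exec.Command("minikube", "-p", profile, "image", "list").Output()
	if err != nil {
		fmt.Println("image list failed:", err)
		return
	}

	// Rough allow-list of prefixes for core images; anything else is reported,
	// in the spirit of the "Found non-minikube image" line above.
	allowed := []string{
		"registry.k8s.io/",
		"gcr.io/k8s-minikube/storage-provisioner",
		"docker.io/kubernetesui/",
	}

	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		img := strings.TrimSpace(sc.Text())
		if img == "" {
			continue
		}
		known := false
		for _, p := range allowed {
			if strings.HasPrefix(img, p) {
				known = true
				break
			}
		}
		if !known {
			fmt.Println("non-minikube image:", img)
		}
	}
}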

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (9.59s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p no-preload-523900 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p no-preload-523900 --alsologtostderr -v=1: (1.8049092s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-523900 -n no-preload-523900
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-523900 -n no-preload-523900: exit status 2 (1.271976s)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 19:14:39.470440    8360 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-523900 -n no-preload-523900
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-523900 -n no-preload-523900: exit status 2 (1.3368098s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 19:14:40.749105    1372 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p no-preload-523900 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p no-preload-523900 --alsologtostderr -v=1: (1.7599556s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-523900 -n no-preload-523900
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-523900 -n no-preload-523900: (1.9261086s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-523900 -n no-preload-523900
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-523900 -n no-preload-523900: (1.4884899s)
--- PASS: TestStartStop/group/no-preload/serial/Pause (9.59s)
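The Pause subtest is a fixed sequence: pause the profile, confirm via `status` that the apiserver reports Paused and the kubelet reports Stopped (hence the two expected non-zero exits above), then unpause and confirm both checks pass again. A compact Go sketch of the same loop, assuming a `minikube` binary on PATH instead of the out\minikube-windows-amd64.exe build used here:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// status runs `minikube status` with a single Go-template field and returns
// the trimmed output plus the exit error (non-zero is expected while paused).
func status(profile, field string) (string, error) {
	out, err := exec.Command("minikube", "status",
		"--format={{."+field+"}}", "-p", profile).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	profile := "no-preload-523900"

	for _, step := range []string{"pause", "unpause"} {
		if err := exec.Command("minikube", step, "-p", profile).Run(); err != nil {
			fmt.Println(step, "failed:", err)
			return
		}
		for _, field := range []string{"APIServer", "Kubelet"} {
			val, err := status(profile, field)
			fmt.Printf("after %s: %s=%s (err=%v)\n", step, field, val, err)
		}
	}
}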

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.87s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p embed-certs-362000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.87s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (9.83s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p embed-certs-362000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p embed-certs-362000 --alsologtostderr -v=1: (1.8893634s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-362000 -n embed-certs-362000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-362000 -n embed-certs-362000: exit status 2 (1.3314898s)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 19:14:42.746292    6068 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-362000 -n embed-certs-362000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-362000 -n embed-certs-362000: exit status 2 (1.463s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 19:14:44.105808   10724 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p embed-certs-362000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p embed-certs-362000 --alsologtostderr -v=1: (1.8279281s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-362000 -n embed-certs-362000
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-362000 -n embed-certs-362000: (1.8284959s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-362000 -n embed-certs-362000
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-362000 -n embed-certs-362000: (1.4922257s)
--- PASS: TestStartStop/group/embed-certs/serial/Pause (9.83s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (81.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-003000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.30.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-003000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.30.0-rc.2: (1m21.2095069s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (81.21s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.97s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-003000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0415 19:16:17.381432   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-661400\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-003000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.9706044s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.97s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (7.63s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p newest-cni-003000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p newest-cni-003000 --alsologtostderr -v=3: (7.6333415s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.63s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (1.15s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-003000 -n newest-cni-003000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-003000 -n newest-cni-003000: exit status 7 (476.1821ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 19:16:27.899979    6568 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p newest-cni-003000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (1.15s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-2rj9t" [dd2e8038-5fad-4d57-9b28-85cd1feef508] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.013465s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.02s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (33.02s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-003000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.30.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-003000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.30.0-rc.2: (31.7136988s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-003000 -n newest-cni-003000
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-003000 -n newest-cni-003000: (1.3031892s)
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (33.02s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.41s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-2rj9t" [dd2e8038-5fad-4d57-9b28-85cd1feef508] Running
E0415 19:16:38.774732   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-008800\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0198685s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-923600 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.41s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.91s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p default-k8s-diff-port-923600 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.91s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (9.35s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-923600 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-923600 --alsologtostderr -v=1: (1.8229891s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-923600 -n default-k8s-diff-port-923600
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-923600 -n default-k8s-diff-port-923600: exit status 2 (1.2698637s)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 19:16:43.146996   10680 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-923600 -n default-k8s-diff-port-923600
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-923600 -n default-k8s-diff-port-923600: exit status 2 (1.3102891s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 19:16:44.425208    9460 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p default-k8s-diff-port-923600 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p default-k8s-diff-port-923600 --alsologtostderr -v=1: (1.9021833s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-923600 -n default-k8s-diff-port-923600
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-923600 -n default-k8s-diff-port-923600: (1.6450804s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-923600 -n default-k8s-diff-port-923600
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-923600 -n default-k8s-diff-port-923600: (1.4019074s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (9.35s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.8s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-003000 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.80s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (8.61s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p newest-cni-003000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p newest-cni-003000 --alsologtostderr -v=1: (1.6603275s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-003000 -n newest-cni-003000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-003000 -n newest-cni-003000: exit status 2 (1.2614275s)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 19:17:04.529083   15528 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-003000 -n newest-cni-003000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-003000 -n newest-cni-003000: exit status 2 (1.2510906s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 19:17:05.785158   15828 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p newest-cni-003000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p newest-cni-003000 --alsologtostderr -v=1: (1.5584088s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-003000 -n newest-cni-003000
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-003000 -n newest-cni-003000: (1.5549072s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-003000 -n newest-cni-003000
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-003000 -n newest-cni-003000: (1.321997s)
--- PASS: TestStartStop/group/newest-cni/serial/Pause (8.61s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.02s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-pk692" [c8328615-f3d1-45a5-9623-fe438af54c86] Running
E0415 19:17:30.829216   11748 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-008800\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0172266s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.02s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.44s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-pk692" [c8328615-f3d1-45a5-9623-fe438af54c86] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0218489s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-075400 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.44s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.84s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p old-k8s-version-075400 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.84s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (9.3s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p old-k8s-version-075400 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p old-k8s-version-075400 --alsologtostderr -v=1: (1.7655762s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-075400 -n old-k8s-version-075400
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-075400 -n old-k8s-version-075400: exit status 2 (1.3086391s)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 19:17:44.210845   15168 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-075400 -n old-k8s-version-075400
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-075400 -n old-k8s-version-075400: exit status 2 (1.2562815s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0415 19:17:45.521233    7884 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p old-k8s-version-075400 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p old-k8s-version-075400 --alsologtostderr -v=1: (1.6622848s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-075400 -n old-k8s-version-075400
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-075400 -n old-k8s-version-075400: (1.9825741s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-075400 -n old-k8s-version-075400
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-075400 -n old-k8s-version-075400: (1.3239588s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (9.30s)

                                                
                                    

Test skip (27/345)

TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.3/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.3/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0-rc.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0-rc.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0-rc.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0-rc.2/binaries (0.00s)

                                                
                                    
TestAddons/parallel/Registry (33.31s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 31.0316ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-5l6xk" [f0db4c53-910d-463d-8fac-8920e9dea90d] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.02127s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-h6qwc" [8b759968-dbb4-4195-92a8-b722433e95b8] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0306308s
addons_test.go:340: (dbg) Run:  kubectl --context addons-661400 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-661400 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-661400 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (22.9786532s)
addons_test.go:355: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (33.31s)

                                                
                                    
TestAddons/parallel/Ingress (28.35s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-661400 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-661400 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-661400 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [ed8c55dc-d66e-46ee-9cce-593018b577b1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [ed8c55dc-d66e-46ee-9cce-593018b577b1] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 26.0199142s
addons_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-661400 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe -p addons-661400 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (1.1389634s)
addons_test.go:269: debug: unexpected stderr for out/minikube-windows-amd64.exe -p addons-661400 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'":
W0415 17:49:26.083349   15392 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
addons_test.go:282: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (28.35s)
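The part of the Ingress test that did run is the in-node probe above: `minikube ssh` into the addons-661400 node and curl 127.0.0.1 with the Host header nginx.example.com so the Ingress rule matches; only the DNS variant is skipped because it would need port forwarding with this driver. A minimal Go sketch of that probe (again assuming `minikube` on PATH; the "Welcome to nginx" check assumes the test pod serves the stock nginx index page):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profile := "addons-661400"

	// Same probe the test ran: curl the ingress from inside the node,
	// overriding the Host header so the nginx Ingress rule matches.
	cmd := exec.Command("minikube", "-p", profile, "ssh",
		"curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Println("in-node curl failed:", err)
		return
	}
	// Assumption: the backing pod returns the default nginx welcome page.
	if strings.Contains(string(out), "Welcome to nginx") {
		fmt.Println("ingress answered for nginx.example.com")
	} else {
		fmt.Println("unexpected response:\n" + string(out))
	}
}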

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (300.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-662500 --alsologtostderr -v=1]
functional_test.go:912: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-662500 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 15720: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.03s)
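The cleanup failure at helpers_test.go:502 leaves the dashboard process (pid 15720) running on the Jenkins host. If that orphaned process ever needs to be removed by hand, a standard Windows command such as the one below would do it; the pid is taken from the log line above, and the need for an elevated prompt is an assumption:

# forcibly terminate the leaked dashboard process (run from an elevated prompt)
taskkill /PID 15720 /F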

                                                
                                    
TestFunctional/parallel/MountCmd (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:64: skipping: mount broken on windows: https://github.com/kubernetes/minikube/issues/8303
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (11.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-662500 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-662500 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-f2w6f" [94c009e2-a17b-43eb-b2ce-23664c5392ad] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-f2w6f" [94c009e2-a17b-43eb-b2ce-23664c5392ad] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.0199507s
functional_test.go:1642: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (11.71s)
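The skip at functional_test.go:1642 covers drivers where NodePorts are only reachable through port forwarding, which includes the Docker driver on Windows used here. As a hedged sketch (not part of the test), the hello-node-connect service created above could still be reached manually by letting minikube open a forwarded URL:

# print a host-reachable URL for the NodePort service in the functional-662500 profile
minikube -p functional-662500 service hello-node-connect --url

# then request it from the host; the port in the printed URL varies per run
curl -s http://127.0.0.1:<printed-port>/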

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
TestScheduledStopUnix (0s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestNetworkPlugins/group/cilium (18.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-008800 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-008800

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-008800

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-008800

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-008800

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-008800

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-008800

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-008800

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-008800

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-008800

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-008800

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
W0415 18:47:33.965023    1348 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-008800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008800"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
W0415 18:47:34.655730   14568 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-008800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008800"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
W0415 18:47:34.939747    6056 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-008800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008800"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-008800

                                                
                                                

                                                
                                                
>>> host: crictl pods:
W0415 18:47:35.419931   11372 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-008800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008800"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
W0415 18:47:35.753314    3816 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-008800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008800"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-008800" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-008800" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-008800" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-008800" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-008800" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-008800" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-008800" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-008800" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
W0415 18:47:37.261745   10048 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-008800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008800"

                                                
                                                

                                                
                                                
>>> host: ip a s:
W0415 18:47:37.549999   10720 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-008800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008800"

                                                
                                                

                                                
                                                
>>> host: ip r s:
W0415 18:47:37.885665    4040 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-008800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008800"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
W0415 18:47:38.207285   11396 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-008800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008800"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
W0415 18:47:38.520598   10552 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-008800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008800"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-008800

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-008800

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-008800" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-008800" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-008800

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-008800

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-008800" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-008800" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-008800" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-008800" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-008800" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
W0415 18:47:40.420840    7936 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-008800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008800"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
W0415 18:47:40.689762    6940 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-008800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008800"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
W0415 18:47:41.324718    6348 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-008800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008800"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
W0415 18:47:41.605558    3936 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-008800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008800"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
W0415 18:47:41.864083   11560 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-008800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008800"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-008800

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
W0415 18:47:42.419798    2972 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-008800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008800"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
W0415 18:47:42.965350   11372 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-008800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008800"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
W0415 18:47:43.248865    2520 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-008800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008800"

                                                
                                                

                                                
                                                
>>> host: docker system info:
W0415 18:47:43.515554   14388 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-008800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008800"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
W0415 18:47:44.154456   12380 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-008800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008800"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
W0415 18:47:44.405834    1512 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-008800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008800"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
W0415 18:47:44.651628    9884 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-008800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008800"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
W0415 18:47:44.885211   15276 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-008800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008800"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
W0415 18:47:45.468543    4208 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-008800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008800"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
W0415 18:47:45.719521   16300 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-008800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008800"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
W0415 18:47:45.994329    3492 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-008800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008800"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
W0415 18:47:46.239363    9820 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-008800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008800"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
W0415 18:47:46.744562   11064 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-008800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008800"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
W0415 18:47:46.989887    7592 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-008800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008800"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
W0415 18:47:47.261226   14392 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-008800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008800"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
W0415 18:47:47.525407    8916 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-008800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008800"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
W0415 18:47:48.101932   10416 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-008800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008800"

                                                
                                                

                                                
                                                
>>> host: crio config:
W0415 18:47:48.437948    9424 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-008800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008800"

                                                
                                                
----------------------- debugLogs end: cilium-008800 [took: 16.3897794s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-008800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cilium-008800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cilium-008800: (1.7069957s)
--- SKIP: TestNetworkPlugins/group/cilium (18.10s)
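Every kubectl probe in the debugLogs block above fails with "context was not found for specified context: cilium-008800", and the kubectl config dump shows an empty kubeconfig, which is consistent with the profile being skipped before a cluster was ever created. A minimal, assumed way to confirm that on the host before re-running the network-plugin group:

# kubeconfig contexts visible to the test environment
kubectl config get-contexts

# minikube profiles that actually exist on this machine
minikube profile list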

                                                
                                    
TestStartStop/group/disable-driver-mounts (1.5s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-782800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p disable-driver-mounts-782800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p disable-driver-mounts-782800: (1.5010706s)
--- SKIP: TestStartStop/group/disable-driver-mounts (1.50s)

                                                
                                    