Test Report: Docker_Windows 19302

686e9da65a2d4195f8e8610efbc417c3b07d1722:2024-07-19:35410

Failed tests (6/348)

| Order | Failed test                                          | Duration (s) |
|-------|------------------------------------------------------|--------------|
| 65    | TestErrorSpam/setup                                  | 73.07        |
| 89    | TestFunctional/serial/MinikubeKubectlCmdDirectly     | 6.94         |
| 96    | TestFunctional/parallel/ConfigCmd                    | 2.08         |
| 306   | TestPause/serial/VerifyDeletedResources              | 5.28         |
| 391   | TestStartStop/group/old-k8s-version/serial/SecondStart | 427.5      |
| 405   | TestStartStop/group/no-preload/serial/Pause          | 37.73        |
TestErrorSpam/setup (73.07s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-755000 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-755000 --driver=docker
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-755000 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-755000 --driver=docker: (1m13.0716085s)
error_spam_test.go:96: unexpected stderr: "W0719 03:40:12.200383    6532 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube3\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."
error_spam_test.go:96: unexpected stderr: "! Failing to connect to https://registry.k8s.io/ from inside the minikube container"
error_spam_test.go:96: unexpected stderr: "* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/"
error_spam_test.go:110: minikube stdout:
* [nospam-755000] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
- KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
- MINIKUBE_LOCATION=19302
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the docker driver based on user configuration
* Using Docker Desktop driver with root privileges
* Starting "nospam-755000" primary control-plane node in "nospam-755000" cluster
* Pulling base image v0.0.44-1721324606-19298 ...
* Creating docker container (CPUs=2, Memory=2250MB) ...
* Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-755000" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
W0719 03:40:12.200383    6532 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
! Failing to connect to https://registry.k8s.io/ from inside the minikube container
* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
--- FAIL: TestErrorSpam/setup (73.07s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (6.94s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:731: link out/minikube-windows-amd64.exe out\kubectl.exe: Cannot create a file when that file already exists.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-365100
helpers_test.go:235: (dbg) docker inspect functional-365100:

-- stdout --
	[
	    {
	        "Id": "5c476421a5f98abf91695501d64efd57b31b3a9994e4dc6374071775054e2ab2",
	        "Created": "2024-07-19T03:42:42.300036768Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 526482,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-07-19T03:42:42.941373894Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:7bda27423b38cbebec7632cdf15a8fcb063ff209d17af249e6b3f1fbdb5fa681",
	        "ResolvConfPath": "/var/lib/docker/containers/5c476421a5f98abf91695501d64efd57b31b3a9994e4dc6374071775054e2ab2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5c476421a5f98abf91695501d64efd57b31b3a9994e4dc6374071775054e2ab2/hostname",
	        "HostsPath": "/var/lib/docker/containers/5c476421a5f98abf91695501d64efd57b31b3a9994e4dc6374071775054e2ab2/hosts",
	        "LogPath": "/var/lib/docker/containers/5c476421a5f98abf91695501d64efd57b31b3a9994e4dc6374071775054e2ab2/5c476421a5f98abf91695501d64efd57b31b3a9994e4dc6374071775054e2ab2-json.log",
	        "Name": "/functional-365100",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-365100:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-365100",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4194304000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f809d06c22f9fe91498cf2f38a5ffcf661e683fab5ed0ffa6bf15518ff1c763a-init/diff:/var/lib/docker/overlay2/8afef3549fbfde76a8b1d15736e3430a7f83f1f1968778d28daa6047c0f61b28/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f809d06c22f9fe91498cf2f38a5ffcf661e683fab5ed0ffa6bf15518ff1c763a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f809d06c22f9fe91498cf2f38a5ffcf661e683fab5ed0ffa6bf15518ff1c763a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f809d06c22f9fe91498cf2f38a5ffcf661e683fab5ed0ffa6bf15518ff1c763a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-365100",
	                "Source": "/var/lib/docker/volumes/functional-365100/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-365100",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-365100",
	                "name.minikube.sigs.k8s.io": "functional-365100",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8ca84353313cd3421540fe3d2a6e785d3804dfe50b69cf6c37e436711ad6a49d",
	            "SandboxKey": "/var/run/docker/netns/8ca84353313c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63170"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63171"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63172"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63173"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63174"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-365100": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "fa486eeea11f5ce193657870cda3bc3fb6e118d2a22d539ebef283001d09cddd",
	                    "EndpointID": "5af179e1ca14bd6c913cbcec2f3c1fad05eb184729d9a51a646e40cb15b9de80",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "functional-365100",
	                        "5c476421a5f9"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-365100 -n functional-365100
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-365100 -n functional-365100: (1.4052509s)
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-365100 logs -n 25: (2.8766856s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                            Args                             |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| pause   | nospam-755000 --log_dir                                     | nospam-755000     | minikube3\jenkins | v1.33.1 | 19 Jul 24 03:41 UTC | 19 Jul 24 03:41 UTC |
	|         | C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-755000 |                   |                   |         |                     |                     |
	|         | pause                                                       |                   |                   |         |                     |                     |
	| unpause | nospam-755000 --log_dir                                     | nospam-755000     | minikube3\jenkins | v1.33.1 | 19 Jul 24 03:41 UTC | 19 Jul 24 03:41 UTC |
	|         | C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-755000 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-755000 --log_dir                                     | nospam-755000     | minikube3\jenkins | v1.33.1 | 19 Jul 24 03:41 UTC | 19 Jul 24 03:41 UTC |
	|         | C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-755000 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-755000 --log_dir                                     | nospam-755000     | minikube3\jenkins | v1.33.1 | 19 Jul 24 03:41 UTC | 19 Jul 24 03:41 UTC |
	|         | C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-755000 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-755000 --log_dir                                     | nospam-755000     | minikube3\jenkins | v1.33.1 | 19 Jul 24 03:41 UTC | 19 Jul 24 03:41 UTC |
	|         | C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-755000 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-755000 --log_dir                                     | nospam-755000     | minikube3\jenkins | v1.33.1 | 19 Jul 24 03:41 UTC | 19 Jul 24 03:41 UTC |
	|         | C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-755000 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-755000 --log_dir                                     | nospam-755000     | minikube3\jenkins | v1.33.1 | 19 Jul 24 03:41 UTC | 19 Jul 24 03:42 UTC |
	|         | C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-755000 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| delete  | -p nospam-755000                                            | nospam-755000     | minikube3\jenkins | v1.33.1 | 19 Jul 24 03:42 UTC | 19 Jul 24 03:42 UTC |
	| start   | -p functional-365100                                        | functional-365100 | minikube3\jenkins | v1.33.1 | 19 Jul 24 03:42 UTC | 19 Jul 24 03:43 UTC |
	|         | --memory=4000                                               |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                       |                   |                   |         |                     |                     |
	|         | --wait=all --driver=docker                                  |                   |                   |         |                     |                     |
	| start   | -p functional-365100                                        | functional-365100 | minikube3\jenkins | v1.33.1 | 19 Jul 24 03:43 UTC | 19 Jul 24 03:44 UTC |
	|         | --alsologtostderr -v=8                                      |                   |                   |         |                     |                     |
	| cache   | functional-365100 cache add                                 | functional-365100 | minikube3\jenkins | v1.33.1 | 19 Jul 24 03:44 UTC | 19 Jul 24 03:44 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | functional-365100 cache add                                 | functional-365100 | minikube3\jenkins | v1.33.1 | 19 Jul 24 03:44 UTC | 19 Jul 24 03:44 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | functional-365100 cache add                                 | functional-365100 | minikube3\jenkins | v1.33.1 | 19 Jul 24 03:44 UTC | 19 Jul 24 03:44 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-365100 cache add                                 | functional-365100 | minikube3\jenkins | v1.33.1 | 19 Jul 24 03:44 UTC | 19 Jul 24 03:44 UTC |
	|         | minikube-local-cache-test:functional-365100                 |                   |                   |         |                     |                     |
	| cache   | functional-365100 cache delete                              | functional-365100 | minikube3\jenkins | v1.33.1 | 19 Jul 24 03:44 UTC | 19 Jul 24 03:44 UTC |
	|         | minikube-local-cache-test:functional-365100                 |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube3\jenkins | v1.33.1 | 19 Jul 24 03:44 UTC | 19 Jul 24 03:44 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | list                                                        | minikube          | minikube3\jenkins | v1.33.1 | 19 Jul 24 03:44 UTC | 19 Jul 24 03:44 UTC |
	| ssh     | functional-365100 ssh sudo                                  | functional-365100 | minikube3\jenkins | v1.33.1 | 19 Jul 24 03:44 UTC | 19 Jul 24 03:44 UTC |
	|         | crictl images                                               |                   |                   |         |                     |                     |
	| ssh     | functional-365100                                           | functional-365100 | minikube3\jenkins | v1.33.1 | 19 Jul 24 03:44 UTC | 19 Jul 24 03:44 UTC |
	|         | ssh sudo docker rmi                                         |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| ssh     | functional-365100 ssh                                       | functional-365100 | minikube3\jenkins | v1.33.1 | 19 Jul 24 03:44 UTC |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-365100 cache reload                              | functional-365100 | minikube3\jenkins | v1.33.1 | 19 Jul 24 03:44 UTC | 19 Jul 24 03:44 UTC |
	| ssh     | functional-365100 ssh                                       | functional-365100 | minikube3\jenkins | v1.33.1 | 19 Jul 24 03:44 UTC | 19 Jul 24 03:44 UTC |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube3\jenkins | v1.33.1 | 19 Jul 24 03:44 UTC | 19 Jul 24 03:44 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube3\jenkins | v1.33.1 | 19 Jul 24 03:44 UTC | 19 Jul 24 03:44 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| kubectl | functional-365100 kubectl --                                | functional-365100 | minikube3\jenkins | v1.33.1 | 19 Jul 24 03:44 UTC | 19 Jul 24 03:44 UTC |
	|         | --context functional-365100                                 |                   |                   |         |                     |                     |
	|         | get pods                                                    |                   |                   |         |                     |                     |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 03:43:35
	Running on machine: minikube3
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 03:43:35.148232    3032 out.go:291] Setting OutFile to fd 788 ...
	I0719 03:43:35.149238    3032 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:43:35.149238    3032 out.go:304] Setting ErrFile to fd 700...
	I0719 03:43:35.149238    3032 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:43:35.179488    3032 out.go:298] Setting JSON to false
	I0719 03:43:35.183328    3032 start.go:129] hostinfo: {"hostname":"minikube3","uptime":177600,"bootTime":1721183014,"procs":187,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0719 03:43:35.183847    3032 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 03:43:35.191195    3032 out.go:177] * [functional-365100] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0719 03:43:35.194640    3032 notify.go:220] Checking for updates...
	I0719 03:43:35.195688    3032 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0719 03:43:35.200286    3032 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 03:43:35.202972    3032 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0719 03:43:35.206164    3032 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 03:43:35.210657    3032 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 03:43:35.214212    3032 config.go:182] Loaded profile config "functional-365100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 03:43:35.214941    3032 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 03:43:35.536137    3032 docker.go:123] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0719 03:43:35.547728    3032 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0719 03:43:35.893051    3032 info.go:266] docker info: {ID:924ecda6-fdfd-44a1-a6d3-1c1814628cc9 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:73 OomKillDisable:true NGoroutines:86 SystemTime:2024-07-19 03:43:35.85372656 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0719 03:43:35.898343    3032 out.go:177] * Using the docker driver based on existing profile
	I0719 03:43:35.902769    3032 start.go:297] selected driver: docker
	I0719 03:43:35.902769    3032 start.go:901] validating driver "docker" against &{Name:functional-365100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-365100 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p M
ountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 03:43:35.903489    3032 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 03:43:35.922638    3032 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0719 03:43:36.252912    3032 info.go:266] docker info: {ID:924ecda6-fdfd-44a1-a6d3-1c1814628cc9 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:73 OomKillDisable:true NGoroutines:86 SystemTime:2024-07-19 03:43:36.213215817 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion
:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:D
ocker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0719 03:43:36.367990    3032 cni.go:84] Creating CNI manager for ""
	I0719 03:43:36.367990    3032 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 03:43:36.367990    3032 start.go:340] cluster config:
	{Name:functional-365100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-365100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Cus
tomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 03:43:36.373614    3032 out.go:177] * Starting "functional-365100" primary control-plane node in "functional-365100" cluster
	I0719 03:43:36.377769    3032 cache.go:121] Beginning downloading kic base image for docker with docker
	I0719 03:43:36.380158    3032 out.go:177] * Pulling base image v0.0.44-1721324606-19298 ...
	I0719 03:43:36.384372    3032 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 03:43:36.384430    3032 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local docker daemon
	I0719 03:43:36.384477    3032 preload.go:146] Found local preload: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0719 03:43:36.384477    3032 cache.go:56] Caching tarball of preloaded images
	I0719 03:43:36.385044    3032 preload.go:172] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0719 03:43:36.385299    3032 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 03:43:36.385580    3032 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-365100\config.json ...
	W0719 03:43:36.589291    3032 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f is of wrong architecture
	I0719 03:43:36.589405    3032 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f to local cache
	I0719 03:43:36.589405    3032 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f.tar -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.44-1721324606-19298@sha256_1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f.tar
	I0719 03:43:36.589467    3032 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f.tar -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.44-1721324606-19298@sha256_1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f.tar
	I0719 03:43:36.589467    3032 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory
	I0719 03:43:36.589467    3032 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory, skipping pull
	I0719 03:43:36.589467    3032 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f exists in cache, skipping pull
	I0719 03:43:36.590024    3032 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f as a tarball
	I0719 03:43:36.590024    3032 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f from local cache
	I0719 03:43:36.590024    3032 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f.tar -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.44-1721324606-19298@sha256_1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f.tar
	I0719 03:43:36.604353    3032 image.go:273] response: 
	I0719 03:43:37.119202    3032 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f from cached tarball
	I0719 03:43:37.119280    3032 cache.go:194] Successfully downloaded all kic artifacts
	I0719 03:43:37.119566    3032 start.go:360] acquireMachinesLock for functional-365100: {Name:mk27eda741d747b06720243750438bd0d55300f4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 03:43:37.120021    3032 start.go:364] duration metric: took 347.9µs to acquireMachinesLock for "functional-365100"
	I0719 03:43:37.120021    3032 start.go:96] Skipping create...Using existing machine configuration
	I0719 03:43:37.120021    3032 fix.go:54] fixHost starting: 
	I0719 03:43:37.139497    3032 cli_runner.go:164] Run: docker container inspect functional-365100 --format={{.State.Status}}
	I0719 03:43:37.324120    3032 fix.go:112] recreateIfNeeded on functional-365100: state=Running err=<nil>
	W0719 03:43:37.324120    3032 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 03:43:37.327121    3032 out.go:177] * Updating the running docker "functional-365100" container ...
	I0719 03:43:37.331536    3032 machine.go:94] provisionDockerMachine start ...
	I0719 03:43:37.342136    3032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-365100
	I0719 03:43:37.530330    3032 main.go:141] libmachine: Using SSH client type: native
	I0719 03:43:37.531346    3032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x142aa40] 0x142d620 <nil>  [] 0s} 127.0.0.1 63170 <nil> <nil>}
	I0719 03:43:37.531346    3032 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 03:43:37.695987    3032 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-365100
	
	I0719 03:43:37.696108    3032 ubuntu.go:169] provisioning hostname "functional-365100"
	I0719 03:43:37.708304    3032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-365100
	I0719 03:43:37.921755    3032 main.go:141] libmachine: Using SSH client type: native
	I0719 03:43:37.922430    3032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x142aa40] 0x142d620 <nil>  [] 0s} 127.0.0.1 63170 <nil> <nil>}
	I0719 03:43:37.922430    3032 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-365100 && echo "functional-365100" | sudo tee /etc/hostname
	I0719 03:43:38.110544    3032 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-365100
	
	I0719 03:43:38.122725    3032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-365100
	I0719 03:43:38.310761    3032 main.go:141] libmachine: Using SSH client type: native
	I0719 03:43:38.310761    3032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x142aa40] 0x142d620 <nil>  [] 0s} 127.0.0.1 63170 <nil> <nil>}
	I0719 03:43:38.310761    3032 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-365100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-365100/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-365100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 03:43:38.491147    3032 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 03:43:38.491215    3032 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0719 03:43:38.491278    3032 ubuntu.go:177] setting up certificates
	I0719 03:43:38.491352    3032 provision.go:84] configureAuth start
	I0719 03:43:38.506997    3032 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-365100
	I0719 03:43:38.696697    3032 provision.go:143] copyHostCerts
	I0719 03:43:38.696697    3032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem
	I0719 03:43:38.697772    3032 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0719 03:43:38.697772    3032 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0719 03:43:38.698485    3032 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0719 03:43:38.700015    3032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem
	I0719 03:43:38.700015    3032 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0719 03:43:38.700015    3032 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0719 03:43:38.700866    3032 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0719 03:43:38.702179    3032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem
	I0719 03:43:38.702179    3032 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0719 03:43:38.702179    3032 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0719 03:43:38.702952    3032 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1675 bytes)
	I0719 03:43:38.704604    3032 provision.go:117] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-365100 san=[127.0.0.1 192.168.49.2 functional-365100 localhost minikube]
	I0719 03:43:38.952859    3032 provision.go:177] copyRemoteCerts
	I0719 03:43:38.966493    3032 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 03:43:38.975382    3032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-365100
	I0719 03:43:39.154122    3032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63170 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-365100\id_rsa Username:docker}
	I0719 03:43:39.277616    3032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0719 03:43:39.277616    3032 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0719 03:43:39.331176    3032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0719 03:43:39.332625    3032 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0719 03:43:39.380826    3032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0719 03:43:39.381651    3032 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 03:43:39.437702    3032 provision.go:87] duration metric: took 946.2673ms to configureAuth
	I0719 03:43:39.438314    3032 ubuntu.go:193] setting minikube options for container-runtime
	I0719 03:43:39.439191    3032 config.go:182] Loaded profile config "functional-365100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 03:43:39.457910    3032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-365100
	I0719 03:43:39.647116    3032 main.go:141] libmachine: Using SSH client type: native
	I0719 03:43:39.647116    3032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x142aa40] 0x142d620 <nil>  [] 0s} 127.0.0.1 63170 <nil> <nil>}
	I0719 03:43:39.647116    3032 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0719 03:43:39.820654    3032 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0719 03:43:39.820745    3032 ubuntu.go:71] root file system type: overlay
	I0719 03:43:39.820990    3032 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0719 03:43:39.832230    3032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-365100
	I0719 03:43:40.014162    3032 main.go:141] libmachine: Using SSH client type: native
	I0719 03:43:40.014722    3032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x142aa40] 0x142d620 <nil>  [] 0s} 127.0.0.1 63170 <nil> <nil>}
	I0719 03:43:40.014875    3032 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0719 03:43:40.214474    3032 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0719 03:43:40.225680    3032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-365100
	I0719 03:43:40.402325    3032 main.go:141] libmachine: Using SSH client type: native
	I0719 03:43:40.403405    3032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x142aa40] 0x142d620 <nil>  [] 0s} 127.0.0.1 63170 <nil> <nil>}
	I0719 03:43:40.403457    3032 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0719 03:43:40.609254    3032 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 03:43:40.609346    3032 machine.go:97] duration metric: took 3.2777129s to provisionDockerMachine
	I0719 03:43:40.609346    3032 start.go:293] postStartSetup for "functional-365100" (driver="docker")
	I0719 03:43:40.609409    3032 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 03:43:40.623670    3032 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 03:43:40.633959    3032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-365100
	I0719 03:43:40.802944    3032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63170 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-365100\id_rsa Username:docker}
	I0719 03:43:40.945500    3032 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 03:43:40.955527    3032 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.4 LTS"
	I0719 03:43:40.955527    3032 command_runner.go:130] > NAME="Ubuntu"
	I0719 03:43:40.955527    3032 command_runner.go:130] > VERSION_ID="22.04"
	I0719 03:43:40.955527    3032 command_runner.go:130] > VERSION="22.04.4 LTS (Jammy Jellyfish)"
	I0719 03:43:40.955527    3032 command_runner.go:130] > VERSION_CODENAME=jammy
	I0719 03:43:40.955527    3032 command_runner.go:130] > ID=ubuntu
	I0719 03:43:40.955527    3032 command_runner.go:130] > ID_LIKE=debian
	I0719 03:43:40.955527    3032 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0719 03:43:40.955527    3032 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0719 03:43:40.955527    3032 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0719 03:43:40.955527    3032 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0719 03:43:40.955527    3032 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0719 03:43:40.955527    3032 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0719 03:43:40.955527    3032 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0719 03:43:40.955527    3032 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0719 03:43:40.955527    3032 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0719 03:43:40.955527    3032 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0719 03:43:40.955527    3032 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0719 03:43:40.956504    3032 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\109722.pem -> 109722.pem in /etc/ssl/certs
	I0719 03:43:40.956504    3032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\109722.pem -> /etc/ssl/certs/109722.pem
	I0719 03:43:40.957501    3032 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\test\nested\copy\10972\hosts -> hosts in /etc/test/nested/copy/10972
	I0719 03:43:40.957501    3032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\test\nested\copy\10972\hosts -> /etc/test/nested/copy/10972/hosts
	I0719 03:43:40.968501    3032 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/10972
	I0719 03:43:40.986510    3032 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\109722.pem --> /etc/ssl/certs/109722.pem (1708 bytes)
	I0719 03:43:41.033749    3032 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\test\nested\copy\10972\hosts --> /etc/test/nested/copy/10972/hosts (40 bytes)
	I0719 03:43:41.079200    3032 start.go:296] duration metric: took 469.8496ms for postStartSetup
	I0719 03:43:41.093412    3032 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 03:43:41.101162    3032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-365100
	I0719 03:43:41.272374    3032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63170 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-365100\id_rsa Username:docker}
	I0719 03:43:41.390104    3032 command_runner.go:130] > 1%!
	(MISSING)I0719 03:43:41.404470    3032 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0719 03:43:41.418706    3032 command_runner.go:130] > 951G
	I0719 03:43:41.418706    3032 fix.go:56] duration metric: took 4.2986527s for fixHost
	I0719 03:43:41.418706    3032 start.go:83] releasing machines lock for "functional-365100", held for 4.2986527s
	I0719 03:43:41.430657    3032 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-365100
	I0719 03:43:41.619136    3032 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0719 03:43:41.628123    3032 ssh_runner.go:195] Run: cat /version.json
	I0719 03:43:41.629129    3032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-365100
	I0719 03:43:41.638914    3032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-365100
	I0719 03:43:41.802385    3032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63170 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-365100\id_rsa Username:docker}
	I0719 03:43:41.817381    3032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63170 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-365100\id_rsa Username:docker}
	I0719 03:43:41.936852    3032 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0719 03:43:41.942456    3032 command_runner.go:130] > {"iso_version": "v1.33.1-1721146474-19264", "kicbase_version": "v0.0.44-1721324606-19298", "minikube_version": "v1.33.1", "commit": "0e13329c5f674facda20b63833c6d01811d249dd"}
	W0719 03:43:41.942456    3032 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0719 03:43:41.957819    3032 ssh_runner.go:195] Run: systemctl --version
	I0719 03:43:41.970829    3032 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.12)
	I0719 03:43:41.971582    3032 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0719 03:43:41.984191    3032 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0719 03:43:41.997859    3032 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0719 03:43:41.997859    3032 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0719 03:43:41.997859    3032 command_runner.go:130] > Device: 91h/145d	Inode: 274         Links: 1
	I0719 03:43:41.997859    3032 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0719 03:43:41.997859    3032 command_runner.go:130] > Access: 2024-07-19 03:28:52.815463295 +0000
	I0719 03:43:41.997859    3032 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0719 03:43:41.997859    3032 command_runner.go:130] > Change: 2024-07-19 03:28:18.147792145 +0000
	I0719 03:43:41.997859    3032 command_runner.go:130] >  Birth: 2024-07-19 03:28:18.147792145 +0000
	I0719 03:43:42.011497    3032 ssh_runner.go:195] Run: sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0719 03:43:42.027196    3032 command_runner.go:130] ! find: '\\etc\\cni\\net.d': No such file or directory
	W0719 03:43:42.029185    3032 start.go:439] unable to name loopback interface in configureRuntimes: unable to patch loopback cni config "/etc/cni/net.d/*loopback.conf*": sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;: Process exited with status 1
	stdout:
	
	stderr:
	find: '\\etc\\cni\\net.d': No such file or directory
	W0719 03:43:42.040182    3032 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W0719 03:43:42.040182    3032 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0719 03:43:42.041189    3032 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 03:43:42.059181    3032 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0719 03:43:42.059181    3032 start.go:495] detecting cgroup driver to use...
	I0719 03:43:42.059181    3032 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0719 03:43:42.059181    3032 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 03:43:42.090193    3032 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0719 03:43:42.103183    3032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0719 03:43:42.134208    3032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0719 03:43:42.154621    3032 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0719 03:43:42.167583    3032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0719 03:43:42.205531    3032 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 03:43:42.241095    3032 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0719 03:43:42.280199    3032 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 03:43:42.319461    3032 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 03:43:42.353207    3032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0719 03:43:42.388089    3032 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0719 03:43:42.425687    3032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0719 03:43:42.462709    3032 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 03:43:42.484667    3032 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0719 03:43:42.498522    3032 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 03:43:42.532820    3032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 03:43:42.713023    3032 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0719 03:43:55.355404    3032 ssh_runner.go:235] Completed: sudo systemctl restart containerd: (12.6422836s)
	I0719 03:43:55.355561    3032 start.go:495] detecting cgroup driver to use...
	I0719 03:43:55.355607    3032 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0719 03:43:55.371199    3032 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0719 03:43:55.400923    3032 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0719 03:43:55.400923    3032 command_runner.go:130] > [Unit]
	I0719 03:43:55.400923    3032 command_runner.go:130] > Description=Docker Application Container Engine
	I0719 03:43:55.400923    3032 command_runner.go:130] > Documentation=https://docs.docker.com
	I0719 03:43:55.400923    3032 command_runner.go:130] > BindsTo=containerd.service
	I0719 03:43:55.400923    3032 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0719 03:43:55.401480    3032 command_runner.go:130] > Wants=network-online.target
	I0719 03:43:55.401480    3032 command_runner.go:130] > Requires=docker.socket
	I0719 03:43:55.401480    3032 command_runner.go:130] > StartLimitBurst=3
	I0719 03:43:55.401564    3032 command_runner.go:130] > StartLimitIntervalSec=60
	I0719 03:43:55.401607    3032 command_runner.go:130] > [Service]
	I0719 03:43:55.401607    3032 command_runner.go:130] > Type=notify
	I0719 03:43:55.401607    3032 command_runner.go:130] > Restart=on-failure
	I0719 03:43:55.401642    3032 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0719 03:43:55.401689    3032 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0719 03:43:55.401755    3032 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0719 03:43:55.401802    3032 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0719 03:43:55.401842    3032 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0719 03:43:55.401876    3032 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0719 03:43:55.401911    3032 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0719 03:43:55.401942    3032 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0719 03:43:55.402049    3032 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0719 03:43:55.402049    3032 command_runner.go:130] > ExecStart=
	I0719 03:43:55.402099    3032 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0719 03:43:55.402099    3032 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0719 03:43:55.402099    3032 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0719 03:43:55.402207    3032 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0719 03:43:55.402207    3032 command_runner.go:130] > LimitNOFILE=infinity
	I0719 03:43:55.402207    3032 command_runner.go:130] > LimitNPROC=infinity
	I0719 03:43:55.402207    3032 command_runner.go:130] > LimitCORE=infinity
	I0719 03:43:55.402207    3032 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0719 03:43:55.402292    3032 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0719 03:43:55.402330    3032 command_runner.go:130] > TasksMax=infinity
	I0719 03:43:55.402370    3032 command_runner.go:130] > TimeoutStartSec=0
	I0719 03:43:55.402404    3032 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0719 03:43:55.402404    3032 command_runner.go:130] > Delegate=yes
	I0719 03:43:55.402404    3032 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0719 03:43:55.402434    3032 command_runner.go:130] > KillMode=process
	I0719 03:43:55.402434    3032 command_runner.go:130] > [Install]
	I0719 03:43:55.402434    3032 command_runner.go:130] > WantedBy=multi-user.target
	I0719 03:43:55.402585    3032 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0719 03:43:55.416619    3032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 03:43:55.441954    3032 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 03:43:55.474413    3032 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0719 03:43:55.490573    3032 ssh_runner.go:195] Run: which cri-dockerd
	I0719 03:43:55.503227    3032 command_runner.go:130] > /usr/bin/cri-dockerd
	I0719 03:43:55.522656    3032 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0719 03:43:55.548856    3032 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0719 03:43:55.607839    3032 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0719 03:43:55.787357    3032 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0719 03:43:55.902787    3032 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0719 03:43:55.903640    3032 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0719 03:43:55.954330    3032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 03:43:56.120054    3032 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 03:43:58.931223    3032 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.8111479s)
	I0719 03:43:58.950962    3032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0719 03:43:58.997613    3032 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0719 03:43:59.055374    3032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0719 03:43:59.108585    3032 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0719 03:43:59.281104    3032 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0719 03:43:59.472128    3032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 03:43:59.603214    3032 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0719 03:43:59.648165    3032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0719 03:43:59.685438    3032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 03:43:59.813182    3032 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0719 03:43:59.973399    3032 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0719 03:43:59.987300    3032 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0719 03:44:00.001408    3032 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0719 03:44:00.001408    3032 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0719 03:44:00.001408    3032 command_runner.go:130] > Device: 9ah/154d	Inode: 720         Links: 1
	I0719 03:44:00.001408    3032 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0719 03:44:00.001408    3032 command_runner.go:130] > Access: 2024-07-19 03:43:59.926001103 +0000
	I0719 03:44:00.001408    3032 command_runner.go:130] > Modify: 2024-07-19 03:43:59.825989511 +0000
	I0719 03:44:00.001408    3032 command_runner.go:130] > Change: 2024-07-19 03:43:59.825989511 +0000
	I0719 03:44:00.001408    3032 command_runner.go:130] >  Birth: -
	I0719 03:44:00.001408    3032 start.go:563] Will wait 60s for crictl version
	I0719 03:44:00.014775    3032 ssh_runner.go:195] Run: which crictl
	I0719 03:44:00.023931    3032 command_runner.go:130] > /usr/bin/crictl
	I0719 03:44:00.036959    3032 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 03:44:00.116250    3032 command_runner.go:130] > Version:  0.1.0
	I0719 03:44:00.116353    3032 command_runner.go:130] > RuntimeName:  docker
	I0719 03:44:00.116382    3032 command_runner.go:130] > RuntimeVersion:  27.0.3
	I0719 03:44:00.116382    3032 command_runner.go:130] > RuntimeApiVersion:  v1
	I0719 03:44:00.116568    3032 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0719 03:44:00.129789    3032 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 03:44:00.315182    3032 command_runner.go:130] > 27.0.3
	I0719 03:44:00.328631    3032 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 03:44:00.439318    3032 command_runner.go:130] > 27.0.3
	I0719 03:44:00.443795    3032 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
	I0719 03:44:00.455236    3032 cli_runner.go:164] Run: docker exec -t functional-365100 dig +short host.docker.internal
	I0719 03:44:00.723584    3032 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0719 03:44:00.736927    3032 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0719 03:44:00.750498    3032 command_runner.go:130] > 192.168.65.254	host.minikube.internal
	I0719 03:44:00.760159    3032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-365100
	I0719 03:44:00.924611    3032 kubeadm.go:883] updating cluster {Name:functional-365100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-365100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 03:44:00.924695    3032 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 03:44:00.935232    3032 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0719 03:44:00.981636    3032 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0719 03:44:00.981636    3032 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0719 03:44:00.981636    3032 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0719 03:44:00.981636    3032 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0719 03:44:00.981636    3032 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0719 03:44:00.981636    3032 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0719 03:44:00.981636    3032 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0719 03:44:00.981636    3032 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 03:44:00.981636    3032 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0719 03:44:00.982161    3032 docker.go:615] Images already preloaded, skipping extraction
	I0719 03:44:00.992767    3032 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0719 03:44:01.043172    3032 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0719 03:44:01.043172    3032 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0719 03:44:01.043172    3032 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0719 03:44:01.043172    3032 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0719 03:44:01.043172    3032 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0719 03:44:01.043172    3032 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0719 03:44:01.043172    3032 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0719 03:44:01.043172    3032 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 03:44:01.043172    3032 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0719 03:44:01.043172    3032 cache_images.go:84] Images are preloaded, skipping loading
	I0719 03:44:01.043700    3032 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.30.3 docker true true} ...
	I0719 03:44:01.043906    3032 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-365100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:functional-365100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 03:44:01.054690    3032 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0719 03:44:01.156337    3032 command_runner.go:130] > cgroupfs
	I0719 03:44:01.156969    3032 cni.go:84] Creating CNI manager for ""
	I0719 03:44:01.157019    3032 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 03:44:01.157019    3032 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 03:44:01.157110    3032 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-365100 NodeName:functional-365100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 03:44:01.157334    3032 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-365100"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 03:44:01.171391    3032 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 03:44:01.192758    3032 command_runner.go:130] > kubeadm
	I0719 03:44:01.192758    3032 command_runner.go:130] > kubectl
	I0719 03:44:01.192758    3032 command_runner.go:130] > kubelet
	I0719 03:44:01.192758    3032 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 03:44:01.206430    3032 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 03:44:01.229075    3032 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0719 03:44:01.266899    3032 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 03:44:01.300964    3032 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0719 03:44:01.353393    3032 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0719 03:44:01.368579    3032 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I0719 03:44:01.378689    3032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 03:44:01.680289    3032 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 03:44:01.707853    3032 certs.go:68] Setting up C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-365100 for IP: 192.168.49.2
	I0719 03:44:01.707853    3032 certs.go:194] generating shared ca certs ...
	I0719 03:44:01.707853    3032 certs.go:226] acquiring lock for ca certs: {Name:mk09ff4ada22228900e1815c250154c7d8d76854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 03:44:01.708922    3032 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key
	I0719 03:44:01.709239    3032 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key
	I0719 03:44:01.709239    3032 certs.go:256] generating profile certs ...
	I0719 03:44:01.710275    3032 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-365100\client.key
	I0719 03:44:01.710275    3032 certs.go:359] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-365100\apiserver.key.229f5968
	I0719 03:44:01.710912    3032 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-365100\proxy-client.key
	I0719 03:44:01.710912    3032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0719 03:44:01.711082    3032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0719 03:44:01.711082    3032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0719 03:44:01.711082    3032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0719 03:44:01.711082    3032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-365100\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0719 03:44:01.711082    3032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-365100\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0719 03:44:01.711768    3032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-365100\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0719 03:44:01.712003    3032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-365100\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0719 03:44:01.712546    3032 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10972.pem (1338 bytes)
	W0719 03:44:01.712898    3032 certs.go:480] ignoring C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10972_empty.pem, impossibly tiny 0 bytes
	I0719 03:44:01.713001    3032 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0719 03:44:01.713413    3032 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0719 03:44:01.713721    3032 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0719 03:44:01.713985    3032 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0719 03:44:01.714407    3032 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\109722.pem (1708 bytes)
	I0719 03:44:01.714407    3032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0719 03:44:01.715057    3032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10972.pem -> /usr/share/ca-certificates/10972.pem
	I0719 03:44:01.715261    3032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\109722.pem -> /usr/share/ca-certificates/109722.pem
	I0719 03:44:01.716208    3032 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 03:44:01.764403    3032 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 03:44:01.811719    3032 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 03:44:01.858223    3032 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 03:44:01.905518    3032 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-365100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0719 03:44:01.952922    3032 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-365100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 03:44:02.002407    3032 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-365100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 03:44:02.051105    3032 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-365100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 03:44:02.102560    3032 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 03:44:02.152501    3032 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10972.pem --> /usr/share/ca-certificates/10972.pem (1338 bytes)
	I0719 03:44:02.198433    3032 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\109722.pem --> /usr/share/ca-certificates/109722.pem (1708 bytes)
	I0719 03:44:02.248172    3032 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 03:44:02.295655    3032 ssh_runner.go:195] Run: openssl version
	I0719 03:44:02.313466    3032 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0719 03:44:02.328805    3032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 03:44:02.363187    3032 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 03:44:02.375381    3032 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 19 03:30 /usr/share/ca-certificates/minikubeCA.pem
	I0719 03:44:02.375381    3032 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 03:30 /usr/share/ca-certificates/minikubeCA.pem
	I0719 03:44:02.389045    3032 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 03:44:02.408455    3032 command_runner.go:130] > b5213941
	I0719 03:44:02.421410    3032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 03:44:02.459056    3032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10972.pem && ln -fs /usr/share/ca-certificates/10972.pem /etc/ssl/certs/10972.pem"
	I0719 03:44:02.494362    3032 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10972.pem
	I0719 03:44:02.506686    3032 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 19 03:42 /usr/share/ca-certificates/10972.pem
	I0719 03:44:02.506686    3032 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 03:42 /usr/share/ca-certificates/10972.pem
	I0719 03:44:02.520390    3032 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10972.pem
	I0719 03:44:02.537317    3032 command_runner.go:130] > 51391683
	I0719 03:44:02.550520    3032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10972.pem /etc/ssl/certs/51391683.0"
	I0719 03:44:02.585807    3032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109722.pem && ln -fs /usr/share/ca-certificates/109722.pem /etc/ssl/certs/109722.pem"
	I0719 03:44:02.622461    3032 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109722.pem
	I0719 03:44:02.637057    3032 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 19 03:42 /usr/share/ca-certificates/109722.pem
	I0719 03:44:02.637123    3032 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 03:42 /usr/share/ca-certificates/109722.pem
	I0719 03:44:02.649633    3032 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109722.pem
	I0719 03:44:02.669020    3032 command_runner.go:130] > 3ec20f2e
	I0719 03:44:02.681382    3032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109722.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 03:44:02.716151    3032 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 03:44:02.729472    3032 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 03:44:02.729472    3032 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0719 03:44:02.729472    3032 command_runner.go:130] > Device: 830h/2096d	Inode: 19306       Links: 1
	I0719 03:44:02.729472    3032 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0719 03:44:02.729472    3032 command_runner.go:130] > Access: 2024-07-19 03:43:01.442271330 +0000
	I0719 03:44:02.729472    3032 command_runner.go:130] > Modify: 2024-07-19 03:43:01.442271330 +0000
	I0719 03:44:02.729472    3032 command_runner.go:130] > Change: 2024-07-19 03:43:01.442271330 +0000
	I0719 03:44:02.729472    3032 command_runner.go:130] >  Birth: 2024-07-19 03:43:01.442271330 +0000
	I0719 03:44:02.743944    3032 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 03:44:02.760397    3032 command_runner.go:130] > Certificate will not expire
	I0719 03:44:02.773482    3032 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 03:44:02.789551    3032 command_runner.go:130] > Certificate will not expire
	I0719 03:44:02.801857    3032 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 03:44:02.818060    3032 command_runner.go:130] > Certificate will not expire
	I0719 03:44:02.830354    3032 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 03:44:02.846241    3032 command_runner.go:130] > Certificate will not expire
	I0719 03:44:02.859825    3032 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 03:44:02.875224    3032 command_runner.go:130] > Certificate will not expire
	I0719 03:44:02.888260    3032 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0719 03:44:02.905778    3032 command_runner.go:130] > Certificate will not expire
	I0719 03:44:02.906751    3032 kubeadm.go:392] StartCluster: {Name:functional-365100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-365100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binar
yMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 03:44:02.917649    3032 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0719 03:44:02.976518    3032 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 03:44:02.998091    3032 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0719 03:44:02.998091    3032 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0719 03:44:02.998091    3032 command_runner.go:130] > /var/lib/minikube/etcd:
	I0719 03:44:02.998091    3032 command_runner.go:130] > member
	I0719 03:44:02.998091    3032 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 03:44:02.998091    3032 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 03:44:03.010646    3032 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 03:44:03.032284    3032 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 03:44:03.043571    3032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-365100
	I0719 03:44:03.227619    3032 kubeconfig.go:125] found "functional-365100" server: "https://127.0.0.1:63174"
	I0719 03:44:03.228554    3032 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0719 03:44:03.229596    3032 kapi.go:59] client config for functional-365100: &rest.Config{Host:"https://127.0.0.1:63174", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-365100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-365100\\client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28d5e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0719 03:44:03.230944    3032 cert_rotation.go:137] Starting client certificate rotation controller
	I0719 03:44:03.243377    3032 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 03:44:03.267616    3032 kubeadm.go:630] The running cluster does not require reconfiguration: 127.0.0.1
	I0719 03:44:03.267616    3032 kubeadm.go:597] duration metric: took 269.5231ms to restartPrimaryControlPlane
	I0719 03:44:03.267616    3032 kubeadm.go:394] duration metric: took 360.8978ms to StartCluster
	I0719 03:44:03.267616    3032 settings.go:142] acquiring lock: {Name:mke99fb8c09012609ce6804e7dfd4d68f5541df7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 03:44:03.268555    3032 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0719 03:44:03.268555    3032 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\kubeconfig: {Name:mk966a7640504e03827322930a51a762b5508893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 03:44:03.270359    3032 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 03:44:03.272824    3032 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 03:44:03.272824    3032 addons.go:69] Setting storage-provisioner=true in profile "functional-365100"
	I0719 03:44:03.272971    3032 config.go:182] Loaded profile config "functional-365100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 03:44:03.272971    3032 addons.go:234] Setting addon storage-provisioner=true in "functional-365100"
	W0719 03:44:03.273273    3032 addons.go:243] addon storage-provisioner should already be in state true
	I0719 03:44:03.273458    3032 host.go:66] Checking if "functional-365100" exists ...
	I0719 03:44:03.272971    3032 addons.go:69] Setting default-storageclass=true in profile "functional-365100"
	I0719 03:44:03.273492    3032 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-365100"
	I0719 03:44:03.277375    3032 out.go:177] * Verifying Kubernetes components...
	I0719 03:44:03.300315    3032 cli_runner.go:164] Run: docker container inspect functional-365100 --format={{.State.Status}}
	I0719 03:44:03.301860    3032 cli_runner.go:164] Run: docker container inspect functional-365100 --format={{.State.Status}}
	I0719 03:44:03.307083    3032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 03:44:03.507032    3032 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0719 03:44:03.507788    3032 kapi.go:59] client config for functional-365100: &rest.Config{Host:"https://127.0.0.1:63174", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-365100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-365100\\client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28d5e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0719 03:44:03.509194    3032 addons.go:234] Setting addon default-storageclass=true in "functional-365100"
	W0719 03:44:03.509288    3032 addons.go:243] addon default-storageclass should already be in state true
	I0719 03:44:03.509352    3032 host.go:66] Checking if "functional-365100" exists ...
	I0719 03:44:03.525567    3032 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 03:44:03.529019    3032 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 03:44:03.529019    3032 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 03:44:03.536388    3032 cli_runner.go:164] Run: docker container inspect functional-365100 --format={{.State.Status}}
	I0719 03:44:03.539853    3032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-365100
	I0719 03:44:03.554552    3032 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 03:44:03.596561    3032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-365100
	I0719 03:44:03.739467    3032 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 03:44:03.739467    3032 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 03:44:03.753468    3032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-365100
	I0719 03:44:03.754479    3032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63170 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-365100\id_rsa Username:docker}
	I0719 03:44:03.806059    3032 node_ready.go:35] waiting up to 6m0s for node "functional-365100" to be "Ready" ...
	I0719 03:44:03.806587    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/nodes/functional-365100
	I0719 03:44:03.806754    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:03.806868    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:03.806868    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:03.812771    3032 round_trippers.go:574] Response Status:  in 5 milliseconds
	I0719 03:44:03.812771    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:03.940939    3032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63170 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-365100\id_rsa Username:docker}
	I0719 03:44:04.520708    3032 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 03:44:04.721734    3032 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 03:44:04.827815    3032 with_retry.go:234] Got a Retry-After 1s response for attempt 1 to https://127.0.0.1:63174/api/v1/nodes/functional-365100
	I0719 03:44:04.827815    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/nodes/functional-365100
	I0719 03:44:04.827815    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:04.827815    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:04.827815    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:04.831261    3032 round_trippers.go:574] Response Status:  in 3 milliseconds
	I0719 03:44:04.831261    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:05.218885    3032 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0719 03:44:05.300171    3032 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0719 03:44:05.300269    3032 retry.go:31] will retry after 285.278604ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0719 03:44:05.338890    3032 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0719 03:44:05.338890    3032 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0719 03:44:05.338890    3032 retry.go:31] will retry after 136.744035ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0719 03:44:05.492366    3032 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0719 03:44:05.605688    3032 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 03:44:05.838128    3032 with_retry.go:234] Got a Retry-After 1s response for attempt 2 to https://127.0.0.1:63174/api/v1/nodes/functional-365100
	I0719 03:44:05.838128    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/nodes/functional-365100
	I0719 03:44:05.838128    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:05.838128    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:05.838128    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:05.844167    3032 round_trippers.go:574] Response Status:  in 6 milliseconds
	I0719 03:44:05.844167    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:05.996807    3032 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0719 03:44:06.004197    3032 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0719 03:44:06.004777    3032 retry.go:31] will retry after 446.947538ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0719 03:44:06.116093    3032 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0719 03:44:06.204594    3032 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0719 03:44:06.204594    3032 retry.go:31] will retry after 212.932829ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0719 03:44:06.446995    3032 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 03:44:06.476958    3032 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0719 03:44:06.855501    3032 with_retry.go:234] Got a Retry-After 1s response for attempt 3 to https://127.0.0.1:63174/api/v1/nodes/functional-365100
	I0719 03:44:06.855501    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/nodes/functional-365100
	I0719 03:44:06.855501    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:06.855501    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:06.855501    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:06.859082    3032 round_trippers.go:574] Response Status:  in 3 milliseconds
	I0719 03:44:06.859171    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:07.112241    3032 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0719 03:44:07.200349    3032 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0719 03:44:07.200516    3032 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0719 03:44:07.200349    3032 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0719 03:44:07.200516    3032 retry.go:31] will retry after 788.049343ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0719 03:44:07.200516    3032 retry.go:31] will retry after 389.14241ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0719 03:44:07.607796    3032 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 03:44:07.871213    3032 with_retry.go:234] Got a Retry-After 1s response for attempt 4 to https://127.0.0.1:63174/api/v1/nodes/functional-365100
	I0719 03:44:07.871213    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/nodes/functional-365100
	I0719 03:44:07.871213    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:07.871213    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:07.871213    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:08.010962    3032 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0719 03:44:12.101209    3032 round_trippers.go:574] Response Status: 200 OK in 4229 milliseconds
	I0719 03:44:12.101209    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:12.101209    3032 round_trippers.go:580]     Audit-Id: eda406f2-b05a-4a4d-8d8d-bb9567ef9463
	I0719 03:44:12.101209    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:12.101209    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:12.101658    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0719 03:44:12.101712    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0719 03:44:12.101712    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:12 GMT
	I0719 03:44:12.102193    3032 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-365100","uid":"cd0b89c1-58ba-4538-a4ce-3b1e0f71885a","resourceVersion":"398","creationTimestamp":"2024-07-19T03:43:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-365100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"functional-365100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T03_43_15_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-19T03:43:11Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0719 03:44:12.103545    3032 node_ready.go:49] node "functional-365100" has status "Ready":"True"
	I0719 03:44:12.104164    3032 node_ready.go:38] duration metric: took 8.2979192s for node "functional-365100" to be "Ready" ...
	I0719 03:44:12.104229    3032 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 03:44:12.104442    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/namespaces/kube-system/pods
	I0719 03:44:12.104544    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:12.104544    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:12.104626    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:12.205677    3032 round_trippers.go:574] Response Status: 200 OK in 101 milliseconds
	I0719 03:44:12.205794    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:12.205794    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:12.205794    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:12.205794    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0719 03:44:12.205794    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0719 03:44:12.205794    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:12 GMT
	I0719 03:44:12.205794    3032 round_trippers.go:580]     Audit-Id: f5206288-c3f8-4d05-b659-54f072eeb5ec
	I0719 03:44:12.304910    3032 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"408"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-j8zns","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"adcc4338-35fa-4b8d-a2b8-d768306b7701","resourceVersion":"386","creationTimestamp":"2024-07-19T03:43:28Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"be30cc86-1696-4615-ae2f-ee1803ac64c8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T03:43:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"be30cc86-1696-4615-ae2f-ee1803ac64c8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50306 chars]
	I0719 03:44:12.312571    3032 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-j8zns" in "kube-system" namespace to be "Ready" ...
	I0719 03:44:12.312571    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-j8zns
	I0719 03:44:12.312571    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:12.312571    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:12.312571    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:12.502646    3032 round_trippers.go:574] Response Status: 200 OK in 190 milliseconds
	I0719 03:44:12.502789    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:12.502789    3032 round_trippers.go:580]     Audit-Id: dd57c567-a16a-48a2-ace7-1cf37d3ce303
	I0719 03:44:12.502789    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:12.502789    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:12.502789    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0719 03:44:12.502883    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0719 03:44:12.502883    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:12 GMT
	I0719 03:44:12.503173    3032 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-j8zns","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"adcc4338-35fa-4b8d-a2b8-d768306b7701","resourceVersion":"386","creationTimestamp":"2024-07-19T03:43:28Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"be30cc86-1696-4615-ae2f-ee1803ac64c8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T03:43:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"be30cc86-1696-4615-ae2f-ee1803ac64c8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6239 chars]
	I0719 03:44:12.504427    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/nodes/functional-365100
	I0719 03:44:12.504427    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:12.504427    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:12.504427    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:12.521228    3032 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0719 03:44:12.521228    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:12.521228    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:12.521228    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:12.521228    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f9a8a5c9-f618-466c-8da9-057e4385fbbc
	I0719 03:44:12.521228    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e11585d-ef3a-4b4f-9e96-5cd5427a24ec
	I0719 03:44:12.521228    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:12 GMT
	I0719 03:44:12.521228    3032 round_trippers.go:580]     Audit-Id: 99558627-08e2-404b-bb01-ae023937729b
	I0719 03:44:12.521789    3032 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-365100","uid":"cd0b89c1-58ba-4538-a4ce-3b1e0f71885a","resourceVersion":"398","creationTimestamp":"2024-07-19T03:43:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-365100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"functional-365100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T03_43_15_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-19T03:43:11Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0719 03:44:12.522604    3032 pod_ready.go:92] pod "coredns-7db6d8ff4d-j8zns" in "kube-system" namespace has status "Ready":"True"
	I0719 03:44:12.522717    3032 pod_ready.go:81] duration metric: took 210.1439ms for pod "coredns-7db6d8ff4d-j8zns" in "kube-system" namespace to be "Ready" ...
	I0719 03:44:12.522764    3032 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-365100" in "kube-system" namespace to be "Ready" ...
	I0719 03:44:12.522959    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/namespaces/kube-system/pods/etcd-functional-365100
	I0719 03:44:12.523009    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:12.523073    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:12.523073    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:12.598842    3032 round_trippers.go:574] Response Status: 200 OK in 75 milliseconds
	I0719 03:44:12.598842    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:12.598974    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e11585d-ef3a-4b4f-9e96-5cd5427a24ec
	I0719 03:44:12.598974    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:12 GMT
	I0719 03:44:12.598974    3032 round_trippers.go:580]     Audit-Id: fadb87b2-104f-4c88-868c-eed2e06ad395
	I0719 03:44:12.598974    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:12.598974    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:12.598974    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f9a8a5c9-f618-466c-8da9-057e4385fbbc
	I0719 03:44:12.599317    3032 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-365100","namespace":"kube-system","uid":"80f2b059-2dd0-41b0-9ed6-0c24ec06ad28","resourceVersion":"274","creationTimestamp":"2024-07-19T03:43:15Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"8bbeeb99916289f65f9a02d3d2f22a97","kubernetes.io/config.mirror":"8bbeeb99916289f65f9a02d3d2f22a97","kubernetes.io/config.seen":"2024-07-19T03:43:14.806291769Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-365100","uid":"cd0b89c1-58ba-4538-a4ce-3b1e0f71885a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T03:43:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6153 chars]
	I0719 03:44:12.600116    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/nodes/functional-365100
	I0719 03:44:12.600162    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:12.600162    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:12.600162    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:12.615156    3032 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0719 03:44:12.615156    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:12.615156    3032 round_trippers.go:580]     Audit-Id: b3b11688-f58f-40cc-b7c3-9edde7c03251
	I0719 03:44:12.615156    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:12.615156    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:12.615156    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f9a8a5c9-f618-466c-8da9-057e4385fbbc
	I0719 03:44:12.615156    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e11585d-ef3a-4b4f-9e96-5cd5427a24ec
	I0719 03:44:12.615156    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:12 GMT
	I0719 03:44:12.616269    3032 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-365100","uid":"cd0b89c1-58ba-4538-a4ce-3b1e0f71885a","resourceVersion":"398","creationTimestamp":"2024-07-19T03:43:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-365100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"functional-365100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T03_43_15_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-19T03:43:11Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0719 03:44:12.617287    3032 pod_ready.go:92] pod "etcd-functional-365100" in "kube-system" namespace has status "Ready":"True"
	I0719 03:44:12.617336    3032 pod_ready.go:81] duration metric: took 94.5228ms for pod "etcd-functional-365100" in "kube-system" namespace to be "Ready" ...
	I0719 03:44:12.617336    3032 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-365100" in "kube-system" namespace to be "Ready" ...
	I0719 03:44:12.617497    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-365100
	I0719 03:44:12.617545    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:12.617545    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:12.617545    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:12.622212    3032 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 03:44:12.622988    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:12.622988    3032 round_trippers.go:580]     Audit-Id: f13de56d-612c-4fb5-846d-368bf7e2f204
	I0719 03:44:12.622988    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:12.622988    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:12.622988    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f9a8a5c9-f618-466c-8da9-057e4385fbbc
	I0719 03:44:12.622988    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e11585d-ef3a-4b4f-9e96-5cd5427a24ec
	I0719 03:44:12.622988    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:12 GMT
	I0719 03:44:12.622988    3032 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-365100","namespace":"kube-system","uid":"9a8dbf29-88ca-4e14-87e5-83299d40791b","resourceVersion":"414","creationTimestamp":"2024-07-19T03:43:12Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"ce1d47ed07ecac718d5cc31bba966dcf","kubernetes.io/config.mirror":"ce1d47ed07ecac718d5cc31bba966dcf","kubernetes.io/config.seen":"2024-07-19T03:43:05.820416011Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-365100","uid":"cd0b89c1-58ba-4538-a4ce-3b1e0f71885a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T03:43:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8941 chars]
	I0719 03:44:12.622988    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/nodes/functional-365100
	I0719 03:44:12.622988    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:12.622988    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:12.622988    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:12.699091    3032 round_trippers.go:574] Response Status: 200 OK in 76 milliseconds
	I0719 03:44:12.699091    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:12.699091    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e11585d-ef3a-4b4f-9e96-5cd5427a24ec
	I0719 03:44:12.699091    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:12 GMT
	I0719 03:44:12.699091    3032 round_trippers.go:580]     Audit-Id: 176da26b-5c9e-4126-9508-136312da2c92
	I0719 03:44:12.699091    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:12.699207    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:12.699207    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f9a8a5c9-f618-466c-8da9-057e4385fbbc
	I0719 03:44:12.699438    3032 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-365100","uid":"cd0b89c1-58ba-4538-a4ce-3b1e0f71885a","resourceVersion":"398","creationTimestamp":"2024-07-19T03:43:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-365100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"functional-365100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T03_43_15_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-19T03:43:11Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0719 03:44:13.131981    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-365100
	I0719 03:44:13.131981    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:13.131981    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:13.131981    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:13.216768    3032 round_trippers.go:574] Response Status: 200 OK in 84 milliseconds
	I0719 03:44:13.216768    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:13.216768    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e11585d-ef3a-4b4f-9e96-5cd5427a24ec
	I0719 03:44:13.216768    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:13 GMT
	I0719 03:44:13.216768    3032 round_trippers.go:580]     Audit-Id: c279311c-3db7-47e2-b627-76b482b1be8f
	I0719 03:44:13.216768    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:13.216768    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:13.216768    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f9a8a5c9-f618-466c-8da9-057e4385fbbc
	I0719 03:44:13.216768    3032 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-365100","namespace":"kube-system","uid":"9a8dbf29-88ca-4e14-87e5-83299d40791b","resourceVersion":"414","creationTimestamp":"2024-07-19T03:43:12Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"ce1d47ed07ecac718d5cc31bba966dcf","kubernetes.io/config.mirror":"ce1d47ed07ecac718d5cc31bba966dcf","kubernetes.io/config.seen":"2024-07-19T03:43:05.820416011Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-365100","uid":"cd0b89c1-58ba-4538-a4ce-3b1e0f71885a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T03:43:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8941 chars]
	I0719 03:44:13.218488    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/nodes/functional-365100
	I0719 03:44:13.218684    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:13.218684    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:13.218684    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:13.227904    3032 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0719 03:44:13.227904    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:13.227904    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:13.227904    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:13.227904    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f9a8a5c9-f618-466c-8da9-057e4385fbbc
	I0719 03:44:13.227904    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e11585d-ef3a-4b4f-9e96-5cd5427a24ec
	I0719 03:44:13.227904    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:13 GMT
	I0719 03:44:13.227904    3032 round_trippers.go:580]     Audit-Id: 2f3ef40d-5404-4582-b712-37f5d60bdf2b
	I0719 03:44:13.227904    3032 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-365100","uid":"cd0b89c1-58ba-4538-a4ce-3b1e0f71885a","resourceVersion":"398","creationTimestamp":"2024-07-19T03:43:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-365100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"functional-365100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T03_43_15_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-19T03:43:11Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0719 03:44:13.619898    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-365100
	I0719 03:44:13.619898    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:13.619898    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:13.619898    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:13.638128    3032 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0719 03:44:13.638222    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:13.638222    3032 round_trippers.go:580]     Audit-Id: 12aec9bf-bc88-4dfb-becc-a66ad826fffe
	I0719 03:44:13.638222    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:13.638222    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:13.638222    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f9a8a5c9-f618-466c-8da9-057e4385fbbc
	I0719 03:44:13.638222    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e11585d-ef3a-4b4f-9e96-5cd5427a24ec
	I0719 03:44:13.638222    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:13 GMT
	I0719 03:44:13.638509    3032 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-365100","namespace":"kube-system","uid":"9a8dbf29-88ca-4e14-87e5-83299d40791b","resourceVersion":"443","creationTimestamp":"2024-07-19T03:43:12Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"ce1d47ed07ecac718d5cc31bba966dcf","kubernetes.io/config.mirror":"ce1d47ed07ecac718d5cc31bba966dcf","kubernetes.io/config.seen":"2024-07-19T03:43:05.820416011Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-365100","uid":"cd0b89c1-58ba-4538-a4ce-3b1e0f71885a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T03:43:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I0719 03:44:13.639346    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/nodes/functional-365100
	I0719 03:44:13.639346    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:13.639346    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:13.639346    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:13.696263    3032 round_trippers.go:574] Response Status: 200 OK in 56 milliseconds
	I0719 03:44:13.696314    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:13.696314    3032 round_trippers.go:580]     Audit-Id: 93f19213-2ccb-4d2f-b106-326f71191bc3
	I0719 03:44:13.696314    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:13.696314    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:13.696314    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f9a8a5c9-f618-466c-8da9-057e4385fbbc
	I0719 03:44:13.696419    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e11585d-ef3a-4b4f-9e96-5cd5427a24ec
	I0719 03:44:13.696419    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:13 GMT
	I0719 03:44:13.696671    3032 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-365100","uid":"cd0b89c1-58ba-4538-a4ce-3b1e0f71885a","resourceVersion":"398","creationTimestamp":"2024-07-19T03:43:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-365100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"functional-365100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T03_43_15_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-19T03:43:11Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0719 03:44:14.129129    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-365100
	I0719 03:44:14.129129    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:14.129129    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:14.129129    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:14.138016    3032 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0719 03:44:14.138016    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:14.138016    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e11585d-ef3a-4b4f-9e96-5cd5427a24ec
	I0719 03:44:14.138016    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:14 GMT
	I0719 03:44:14.138016    3032 round_trippers.go:580]     Audit-Id: 3be3e25e-6c39-430d-ae61-b7585b7a12a5
	I0719 03:44:14.138016    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:14.138016    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:14.138016    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f9a8a5c9-f618-466c-8da9-057e4385fbbc
	I0719 03:44:14.138622    3032 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-365100","namespace":"kube-system","uid":"9a8dbf29-88ca-4e14-87e5-83299d40791b","resourceVersion":"443","creationTimestamp":"2024-07-19T03:43:12Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"ce1d47ed07ecac718d5cc31bba966dcf","kubernetes.io/config.mirror":"ce1d47ed07ecac718d5cc31bba966dcf","kubernetes.io/config.seen":"2024-07-19T03:43:05.820416011Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-365100","uid":"cd0b89c1-58ba-4538-a4ce-3b1e0f71885a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T03:43:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I0719 03:44:14.140061    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/nodes/functional-365100
	I0719 03:44:14.140104    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:14.140104    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:14.140104    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:14.198049    3032 round_trippers.go:574] Response Status: 200 OK in 57 milliseconds
	I0719 03:44:14.198049    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:14.198049    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f9a8a5c9-f618-466c-8da9-057e4385fbbc
	I0719 03:44:14.198049    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e11585d-ef3a-4b4f-9e96-5cd5427a24ec
	I0719 03:44:14.198049    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:14 GMT
	I0719 03:44:14.198152    3032 round_trippers.go:580]     Audit-Id: 3d523827-99bf-49d4-a09f-5c2f8ddfa68f
	I0719 03:44:14.198152    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:14.198245    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:14.198556    3032 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-365100","uid":"cd0b89c1-58ba-4538-a4ce-3b1e0f71885a","resourceVersion":"398","creationTimestamp":"2024-07-19T03:43:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-365100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"functional-365100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T03_43_15_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-19T03:43:11Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0719 03:44:14.217834    3032 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0719 03:44:14.217834    3032 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0719 03:44:14.217834    3032 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0719 03:44:14.217954    3032 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0719 03:44:14.217954    3032 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0719 03:44:14.217954    3032 command_runner.go:130] > pod/storage-provisioner configured
	I0719 03:44:14.217954    3032 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.6101074s)
	I0719 03:44:14.218068    3032 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0719 03:44:14.218208    3032 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (6.2071985s)
	I0719 03:44:14.218402    3032 round_trippers.go:463] GET https://127.0.0.1:63174/apis/storage.k8s.io/v1/storageclasses
	I0719 03:44:14.218502    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:14.218502    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:14.218502    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:14.233210    3032 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0719 03:44:14.233210    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:14.233210    3032 round_trippers.go:580]     Content-Length: 1273
	I0719 03:44:14.233210    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:14 GMT
	I0719 03:44:14.233210    3032 round_trippers.go:580]     Audit-Id: 86b5352b-7225-49ba-9528-ad8a97cce34c
	I0719 03:44:14.233210    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:14.233210    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:14.233210    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f9a8a5c9-f618-466c-8da9-057e4385fbbc
	I0719 03:44:14.233210    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e11585d-ef3a-4b4f-9e96-5cd5427a24ec
	I0719 03:44:14.233210    3032 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"479"},"items":[{"metadata":{"name":"standard","uid":"1aa43efe-ce32-467d-843c-4ca491111151","resourceVersion":"346","creationTimestamp":"2024-07-19T03:43:29Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-19T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0719 03:44:14.236820    3032 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"1aa43efe-ce32-467d-843c-4ca491111151","resourceVersion":"346","creationTimestamp":"2024-07-19T03:43:29Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-19T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0719 03:44:14.237696    3032 round_trippers.go:463] PUT https://127.0.0.1:63174/apis/storage.k8s.io/v1/storageclasses/standard
	I0719 03:44:14.241114    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:14.241114    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:14.241114    3032 round_trippers.go:473]     Content-Type: application/json
	I0719 03:44:14.241114    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:14.247476    3032 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 03:44:14.247476    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:14.247476    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e11585d-ef3a-4b4f-9e96-5cd5427a24ec
	I0719 03:44:14.247476    3032 round_trippers.go:580]     Content-Length: 1220
	I0719 03:44:14.247476    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:14 GMT
	I0719 03:44:14.247476    3032 round_trippers.go:580]     Audit-Id: a1076b3d-210b-4dc3-a830-d9ca70683f65
	I0719 03:44:14.247476    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:14.247476    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:14.247476    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f9a8a5c9-f618-466c-8da9-057e4385fbbc
	I0719 03:44:14.247476    3032 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"1aa43efe-ce32-467d-843c-4ca491111151","resourceVersion":"346","creationTimestamp":"2024-07-19T03:43:29Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-19T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0719 03:44:14.252492    3032 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0719 03:44:14.255470    3032 addons.go:510] duration metric: took 10.9826627s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0719 03:44:14.619178    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-365100
	I0719 03:44:14.619219    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:14.619219    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:14.619219    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:14.625252    3032 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 03:44:14.625252    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:14.625252    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:14.625252    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:14.625252    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f9a8a5c9-f618-466c-8da9-057e4385fbbc
	I0719 03:44:14.625252    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e11585d-ef3a-4b4f-9e96-5cd5427a24ec
	I0719 03:44:14.625252    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:14 GMT
	I0719 03:44:14.625252    3032 round_trippers.go:580]     Audit-Id: 0f018ca9-d258-4587-be24-96d28e0b120f
	I0719 03:44:14.625252    3032 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-365100","namespace":"kube-system","uid":"9a8dbf29-88ca-4e14-87e5-83299d40791b","resourceVersion":"443","creationTimestamp":"2024-07-19T03:43:12Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"ce1d47ed07ecac718d5cc31bba966dcf","kubernetes.io/config.mirror":"ce1d47ed07ecac718d5cc31bba966dcf","kubernetes.io/config.seen":"2024-07-19T03:43:05.820416011Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-365100","uid":"cd0b89c1-58ba-4538-a4ce-3b1e0f71885a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T03:43:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I0719 03:44:14.626216    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/nodes/functional-365100
	I0719 03:44:14.626216    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:14.626216    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:14.626216    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:14.632327    3032 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 03:44:14.632327    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:14.632327    3032 round_trippers.go:580]     Audit-Id: 3f8dd5c8-fb1a-45a9-b607-aeaf7e7db2c8
	I0719 03:44:14.632327    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:14.632327    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:14.632327    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f9a8a5c9-f618-466c-8da9-057e4385fbbc
	I0719 03:44:14.632327    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e11585d-ef3a-4b4f-9e96-5cd5427a24ec
	I0719 03:44:14.632327    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:14 GMT
	I0719 03:44:14.632327    3032 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-365100","uid":"cd0b89c1-58ba-4538-a4ce-3b1e0f71885a","resourceVersion":"398","creationTimestamp":"2024-07-19T03:43:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-365100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"functional-365100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T03_43_15_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-19T03:43:11Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0719 03:44:14.633054    3032 pod_ready.go:102] pod "kube-apiserver-functional-365100" in "kube-system" namespace has status "Ready":"False"
	I0719 03:44:15.131921    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-365100
	I0719 03:44:15.131921    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:15.131921    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:15.131921    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:15.138029    3032 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 03:44:15.138029    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:15.138029    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f9a8a5c9-f618-466c-8da9-057e4385fbbc
	I0719 03:44:15.138029    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e11585d-ef3a-4b4f-9e96-5cd5427a24ec
	I0719 03:44:15.138029    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:15 GMT
	I0719 03:44:15.138029    3032 round_trippers.go:580]     Audit-Id: e519378e-2ddd-4e07-8350-17e8be2017b1
	I0719 03:44:15.138029    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:15.138029    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:15.139415    3032 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-365100","namespace":"kube-system","uid":"9a8dbf29-88ca-4e14-87e5-83299d40791b","resourceVersion":"443","creationTimestamp":"2024-07-19T03:43:12Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"ce1d47ed07ecac718d5cc31bba966dcf","kubernetes.io/config.mirror":"ce1d47ed07ecac718d5cc31bba966dcf","kubernetes.io/config.seen":"2024-07-19T03:43:05.820416011Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-365100","uid":"cd0b89c1-58ba-4538-a4ce-3b1e0f71885a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T03:43:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I0719 03:44:15.139736    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/nodes/functional-365100
	I0719 03:44:15.139736    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:15.139736    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:15.139736    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:15.200685    3032 round_trippers.go:574] Response Status: 200 OK in 60 milliseconds
	I0719 03:44:15.200685    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:15.200685    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f9a8a5c9-f618-466c-8da9-057e4385fbbc
	I0719 03:44:15.200685    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e11585d-ef3a-4b4f-9e96-5cd5427a24ec
	I0719 03:44:15.200685    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:15 GMT
	I0719 03:44:15.200685    3032 round_trippers.go:580]     Audit-Id: dab6910d-e443-4354-9a46-ceda641eff18
	I0719 03:44:15.200685    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:15.200685    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:15.202072    3032 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-365100","uid":"cd0b89c1-58ba-4538-a4ce-3b1e0f71885a","resourceVersion":"398","creationTimestamp":"2024-07-19T03:43:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-365100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"functional-365100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T03_43_15_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-19T03:43:11Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0719 03:44:15.631890    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-365100
	I0719 03:44:15.632085    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:15.632085    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:15.632085    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:15.638834    3032 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 03:44:15.638834    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:15.638834    3032 round_trippers.go:580]     Audit-Id: 9e45c077-77cc-4534-a2ee-3b1cacb82cf4
	I0719 03:44:15.638834    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:15.638834    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:15.638834    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f9a8a5c9-f618-466c-8da9-057e4385fbbc
	I0719 03:44:15.638911    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e11585d-ef3a-4b4f-9e96-5cd5427a24ec
	I0719 03:44:15.638911    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:15 GMT
	I0719 03:44:15.639265    3032 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-365100","namespace":"kube-system","uid":"9a8dbf29-88ca-4e14-87e5-83299d40791b","resourceVersion":"443","creationTimestamp":"2024-07-19T03:43:12Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"ce1d47ed07ecac718d5cc31bba966dcf","kubernetes.io/config.mirror":"ce1d47ed07ecac718d5cc31bba966dcf","kubernetes.io/config.seen":"2024-07-19T03:43:05.820416011Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-365100","uid":"cd0b89c1-58ba-4538-a4ce-3b1e0f71885a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T03:43:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I0719 03:44:15.640198    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/nodes/functional-365100
	I0719 03:44:15.640198    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:15.640198    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:15.640198    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:15.645144    3032 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 03:44:15.645144    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:15.645144    3032 round_trippers.go:580]     Audit-Id: 14e8e817-b916-4481-b85f-4ba68e6ca85b
	I0719 03:44:15.645144    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:15.645144    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:15.645144    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f9a8a5c9-f618-466c-8da9-057e4385fbbc
	I0719 03:44:15.645694    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e11585d-ef3a-4b4f-9e96-5cd5427a24ec
	I0719 03:44:15.645694    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:15 GMT
	I0719 03:44:15.646153    3032 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-365100","uid":"cd0b89c1-58ba-4538-a4ce-3b1e0f71885a","resourceVersion":"398","creationTimestamp":"2024-07-19T03:43:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-365100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"functional-365100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T03_43_15_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-19T03:43:11Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0719 03:44:16.118284    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-365100
	I0719 03:44:16.118358    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:16.118417    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:16.118417    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:16.125097    3032 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 03:44:16.125097    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:16.125097    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:16.125097    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:16.125097    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f9a8a5c9-f618-466c-8da9-057e4385fbbc
	I0719 03:44:16.125097    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e11585d-ef3a-4b4f-9e96-5cd5427a24ec
	I0719 03:44:16.125097    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:16 GMT
	I0719 03:44:16.125097    3032 round_trippers.go:580]     Audit-Id: 9b6565b8-d3e9-4891-9383-0664cb7f2f4c
	I0719 03:44:16.125537    3032 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-365100","namespace":"kube-system","uid":"9a8dbf29-88ca-4e14-87e5-83299d40791b","resourceVersion":"443","creationTimestamp":"2024-07-19T03:43:12Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"ce1d47ed07ecac718d5cc31bba966dcf","kubernetes.io/config.mirror":"ce1d47ed07ecac718d5cc31bba966dcf","kubernetes.io/config.seen":"2024-07-19T03:43:05.820416011Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-365100","uid":"cd0b89c1-58ba-4538-a4ce-3b1e0f71885a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T03:43:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I0719 03:44:16.126539    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/nodes/functional-365100
	I0719 03:44:16.126623    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:16.126649    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:16.126649    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:16.132342    3032 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 03:44:16.132425    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:16.132425    3032 round_trippers.go:580]     Audit-Id: 1ebb27e5-b9a8-4dfe-89f9-b61179e32eb0
	I0719 03:44:16.132504    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:16.132504    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:16.132504    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f9a8a5c9-f618-466c-8da9-057e4385fbbc
	I0719 03:44:16.132547    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e11585d-ef3a-4b4f-9e96-5cd5427a24ec
	I0719 03:44:16.132547    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:16 GMT
	I0719 03:44:16.132679    3032 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-365100","uid":"cd0b89c1-58ba-4538-a4ce-3b1e0f71885a","resourceVersion":"398","creationTimestamp":"2024-07-19T03:43:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-365100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"functional-365100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T03_43_15_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T03:43:11Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0719 03:44:16.621328    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-365100
	I0719 03:44:16.621441    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:16.621441    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:16.621441    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:16.629273    3032 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0719 03:44:16.629356    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:16.629401    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:16 GMT
	I0719 03:44:16.629433    3032 round_trippers.go:580]     Audit-Id: d1ef6149-0e16-409e-8193-2f1bbc507947
	I0719 03:44:16.629458    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:16.629458    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:16.629458    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f9a8a5c9-f618-466c-8da9-057e4385fbbc
	I0719 03:44:16.629458    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e11585d-ef3a-4b4f-9e96-5cd5427a24ec
	I0719 03:44:16.629677    3032 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-365100","namespace":"kube-system","uid":"9a8dbf29-88ca-4e14-87e5-83299d40791b","resourceVersion":"443","creationTimestamp":"2024-07-19T03:43:12Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"ce1d47ed07ecac718d5cc31bba966dcf","kubernetes.io/config.mirror":"ce1d47ed07ecac718d5cc31bba966dcf","kubernetes.io/config.seen":"2024-07-19T03:43:05.820416011Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-365100","uid":"cd0b89c1-58ba-4538-a4ce-3b1e0f71885a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T03:43:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I0719 03:44:16.630378    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/nodes/functional-365100
	I0719 03:44:16.630378    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:16.630378    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:16.630378    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:16.637818    3032 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0719 03:44:16.637818    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:16.637818    3032 round_trippers.go:580]     Audit-Id: d81a140c-15cc-4671-9260-dc132f148434
	I0719 03:44:16.637818    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:16.637818    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:16.637818    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f9a8a5c9-f618-466c-8da9-057e4385fbbc
	I0719 03:44:16.637818    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e11585d-ef3a-4b4f-9e96-5cd5427a24ec
	I0719 03:44:16.637818    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:16 GMT
	I0719 03:44:16.638355    3032 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-365100","uid":"cd0b89c1-58ba-4538-a4ce-3b1e0f71885a","resourceVersion":"398","creationTimestamp":"2024-07-19T03:43:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-365100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"functional-365100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T03_43_15_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T03:43:11Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0719 03:44:16.638656    3032 pod_ready.go:102] pod "kube-apiserver-functional-365100" in "kube-system" namespace has status "Ready":"False"
	I0719 03:44:17.124931    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-365100
	I0719 03:44:17.125293    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:17.125293    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:17.125293    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:17.132151    3032 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 03:44:17.132151    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:17.132151    3032 round_trippers.go:580]     Audit-Id: 29404379-121a-47ec-971e-a8dc8e79fa9e
	I0719 03:44:17.132151    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:17.132151    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:17.132151    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f9a8a5c9-f618-466c-8da9-057e4385fbbc
	I0719 03:44:17.132151    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e11585d-ef3a-4b4f-9e96-5cd5427a24ec
	I0719 03:44:17.132151    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:17 GMT
	I0719 03:44:17.132864    3032 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-365100","namespace":"kube-system","uid":"9a8dbf29-88ca-4e14-87e5-83299d40791b","resourceVersion":"443","creationTimestamp":"2024-07-19T03:43:12Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"ce1d47ed07ecac718d5cc31bba966dcf","kubernetes.io/config.mirror":"ce1d47ed07ecac718d5cc31bba966dcf","kubernetes.io/config.seen":"2024-07-19T03:43:05.820416011Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-365100","uid":"cd0b89c1-58ba-4538-a4ce-3b1e0f71885a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T03:43:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I0719 03:44:17.133712    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/nodes/functional-365100
	I0719 03:44:17.133738    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:17.133738    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:17.133738    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:17.140036    3032 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 03:44:17.140096    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:17.140096    3032 round_trippers.go:580]     Audit-Id: 78c6921c-e95e-4b69-a23a-b31a06aa22d8
	I0719 03:44:17.140096    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:17.140096    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:17.140141    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f9a8a5c9-f618-466c-8da9-057e4385fbbc
	I0719 03:44:17.140141    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e11585d-ef3a-4b4f-9e96-5cd5427a24ec
	I0719 03:44:17.140141    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:17 GMT
	I0719 03:44:17.140141    3032 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-365100","uid":"cd0b89c1-58ba-4538-a4ce-3b1e0f71885a","resourceVersion":"398","creationTimestamp":"2024-07-19T03:43:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-365100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"functional-365100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T03_43_15_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T03:43:11Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0719 03:44:17.627116    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-365100
	I0719 03:44:17.627180    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:17.627180    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:17.627238    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:17.634322    3032 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0719 03:44:17.634444    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:17.634444    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:17.634444    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:17.634444    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f9a8a5c9-f618-466c-8da9-057e4385fbbc
	I0719 03:44:17.634560    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e11585d-ef3a-4b4f-9e96-5cd5427a24ec
	I0719 03:44:17.634560    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:17 GMT
	I0719 03:44:17.634612    3032 round_trippers.go:580]     Audit-Id: 670ae148-0a79-4108-8812-fc94aceeea1c
	I0719 03:44:17.634925    3032 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-365100","namespace":"kube-system","uid":"9a8dbf29-88ca-4e14-87e5-83299d40791b","resourceVersion":"443","creationTimestamp":"2024-07-19T03:43:12Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"ce1d47ed07ecac718d5cc31bba966dcf","kubernetes.io/config.mirror":"ce1d47ed07ecac718d5cc31bba966dcf","kubernetes.io/config.seen":"2024-07-19T03:43:05.820416011Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-365100","uid":"cd0b89c1-58ba-4538-a4ce-3b1e0f71885a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T03:43:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I0719 03:44:17.636348    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/nodes/functional-365100
	I0719 03:44:17.636458    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:17.636458    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:17.636458    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:17.642100    3032 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 03:44:17.642100    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:17.642100    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:17.642100    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f9a8a5c9-f618-466c-8da9-057e4385fbbc
	I0719 03:44:17.642100    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e11585d-ef3a-4b4f-9e96-5cd5427a24ec
	I0719 03:44:17.642100    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:17 GMT
	I0719 03:44:17.642100    3032 round_trippers.go:580]     Audit-Id: d74f5274-87c7-4130-bc36-416e75bcbb4c
	I0719 03:44:17.642100    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:17.642100    3032 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-365100","uid":"cd0b89c1-58ba-4538-a4ce-3b1e0f71885a","resourceVersion":"398","creationTimestamp":"2024-07-19T03:43:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-365100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"functional-365100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T03_43_15_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T03:43:11Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0719 03:44:18.118243    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-365100
	I0719 03:44:18.118243    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:18.118243    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:18.118243    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:18.125721    3032 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0719 03:44:18.125721    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:18.125721    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e11585d-ef3a-4b4f-9e96-5cd5427a24ec
	I0719 03:44:18.125721    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:18 GMT
	I0719 03:44:18.125721    3032 round_trippers.go:580]     Audit-Id: 8f644874-c6ca-4037-9011-828ae40eac97
	I0719 03:44:18.125721    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:18.125721    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:18.125721    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f9a8a5c9-f618-466c-8da9-057e4385fbbc
	I0719 03:44:18.126450    3032 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-365100","namespace":"kube-system","uid":"9a8dbf29-88ca-4e14-87e5-83299d40791b","resourceVersion":"443","creationTimestamp":"2024-07-19T03:43:12Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"ce1d47ed07ecac718d5cc31bba966dcf","kubernetes.io/config.mirror":"ce1d47ed07ecac718d5cc31bba966dcf","kubernetes.io/config.seen":"2024-07-19T03:43:05.820416011Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-365100","uid":"cd0b89c1-58ba-4538-a4ce-3b1e0f71885a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T03:43:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I0719 03:44:18.127058    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/nodes/functional-365100
	I0719 03:44:18.127058    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:18.127058    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:18.127058    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:18.133137    3032 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 03:44:18.133203    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:18.133203    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:18 GMT
	I0719 03:44:18.133203    3032 round_trippers.go:580]     Audit-Id: 40a03a1f-63ec-4160-804a-52402b70c082
	I0719 03:44:18.133203    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:18.133203    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:18.133203    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f9a8a5c9-f618-466c-8da9-057e4385fbbc
	I0719 03:44:18.133203    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e11585d-ef3a-4b4f-9e96-5cd5427a24ec
	I0719 03:44:18.133519    3032 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-365100","uid":"cd0b89c1-58ba-4538-a4ce-3b1e0f71885a","resourceVersion":"398","creationTimestamp":"2024-07-19T03:43:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-365100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"functional-365100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T03_43_15_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T03:43:11Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0719 03:44:18.623056    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-365100
	I0719 03:44:18.623056    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:18.623056    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:18.623056    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:18.632428    3032 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0719 03:44:18.632486    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:18.632516    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e11585d-ef3a-4b4f-9e96-5cd5427a24ec
	I0719 03:44:18.632516    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:18 GMT
	I0719 03:44:18.632516    3032 round_trippers.go:580]     Audit-Id: 930a4638-1393-4fee-9d1c-8f4d2370e8da
	I0719 03:44:18.632516    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:18.632553    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:18.632553    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f9a8a5c9-f618-466c-8da9-057e4385fbbc
	I0719 03:44:18.632853    3032 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-365100","namespace":"kube-system","uid":"9a8dbf29-88ca-4e14-87e5-83299d40791b","resourceVersion":"443","creationTimestamp":"2024-07-19T03:43:12Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"ce1d47ed07ecac718d5cc31bba966dcf","kubernetes.io/config.mirror":"ce1d47ed07ecac718d5cc31bba966dcf","kubernetes.io/config.seen":"2024-07-19T03:43:05.820416011Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-365100","uid":"cd0b89c1-58ba-4538-a4ce-3b1e0f71885a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T03:43:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I0719 03:44:18.633468    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/nodes/functional-365100
	I0719 03:44:18.633468    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:18.633468    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:18.633468    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:18.639296    3032 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 03:44:18.639296    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:18.639296    3032 round_trippers.go:580]     Audit-Id: 312d1f08-ee0a-4349-b597-c010c323f7c5
	I0719 03:44:18.639829    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:18.639829    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:18.639829    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f9a8a5c9-f618-466c-8da9-057e4385fbbc
	I0719 03:44:18.639829    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e11585d-ef3a-4b4f-9e96-5cd5427a24ec
	I0719 03:44:18.639829    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:18 GMT
	I0719 03:44:18.639957    3032 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-365100","uid":"cd0b89c1-58ba-4538-a4ce-3b1e0f71885a","resourceVersion":"398","creationTimestamp":"2024-07-19T03:43:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-365100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"functional-365100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T03_43_15_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T03:43:11Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0719 03:44:18.639957    3032 pod_ready.go:102] pod "kube-apiserver-functional-365100" in "kube-system" namespace has status "Ready":"False"
	I0719 03:44:19.124521    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-365100
	I0719 03:44:19.124521    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:19.124848    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:19.124848    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:19.130818    3032 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 03:44:19.130818    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:19.130818    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e11585d-ef3a-4b4f-9e96-5cd5427a24ec
	I0719 03:44:19.130879    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:19 GMT
	I0719 03:44:19.130879    3032 round_trippers.go:580]     Audit-Id: 1799a757-e9da-49ac-bad8-1f63c64875e7
	I0719 03:44:19.130879    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:19.130879    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:19.130879    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f9a8a5c9-f618-466c-8da9-057e4385fbbc
	I0719 03:44:19.131658    3032 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-365100","namespace":"kube-system","uid":"9a8dbf29-88ca-4e14-87e5-83299d40791b","resourceVersion":"443","creationTimestamp":"2024-07-19T03:43:12Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"ce1d47ed07ecac718d5cc31bba966dcf","kubernetes.io/config.mirror":"ce1d47ed07ecac718d5cc31bba966dcf","kubernetes.io/config.seen":"2024-07-19T03:43:05.820416011Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-365100","uid":"cd0b89c1-58ba-4538-a4ce-3b1e0f71885a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T03:43:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I0719 03:44:19.132524    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/nodes/functional-365100
	I0719 03:44:19.132610    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:19.132610    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:19.132610    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:19.139041    3032 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 03:44:19.139135    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:19.139135    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e11585d-ef3a-4b4f-9e96-5cd5427a24ec
	I0719 03:44:19.139135    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:19 GMT
	I0719 03:44:19.139135    3032 round_trippers.go:580]     Audit-Id: f04450a3-681b-4d33-a529-6a398c90ca5c
	I0719 03:44:19.139135    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:19.139135    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:19.139228    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f9a8a5c9-f618-466c-8da9-057e4385fbbc
	I0719 03:44:19.139431    3032 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-365100","uid":"cd0b89c1-58ba-4538-a4ce-3b1e0f71885a","resourceVersion":"398","creationTimestamp":"2024-07-19T03:43:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-365100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"functional-365100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T03_43_15_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T03:43:11Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0719 03:44:19.627515    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-365100
	I0719 03:44:19.627515    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:19.627515    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:19.627515    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:19.634536    3032 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0719 03:44:19.634579    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:19.634579    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:19.634579    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f9a8a5c9-f618-466c-8da9-057e4385fbbc
	I0719 03:44:19.634579    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e11585d-ef3a-4b4f-9e96-5cd5427a24ec
	I0719 03:44:19.634579    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:19 GMT
	I0719 03:44:19.634579    3032 round_trippers.go:580]     Audit-Id: cebe6d04-00da-479b-95f6-736509b5c807
	I0719 03:44:19.634664    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:19.635846    3032 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-365100","namespace":"kube-system","uid":"9a8dbf29-88ca-4e14-87e5-83299d40791b","resourceVersion":"443","creationTimestamp":"2024-07-19T03:43:12Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"ce1d47ed07ecac718d5cc31bba966dcf","kubernetes.io/config.mirror":"ce1d47ed07ecac718d5cc31bba966dcf","kubernetes.io/config.seen":"2024-07-19T03:43:05.820416011Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-365100","uid":"cd0b89c1-58ba-4538-a4ce-3b1e0f71885a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T03:43:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I0719 03:44:19.636063    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/nodes/functional-365100
	I0719 03:44:19.636063    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:19.636063    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:19.636063    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:19.642713    3032 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 03:44:19.642713    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:19.642713    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:19.642713    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:19.642713    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f9a8a5c9-f618-466c-8da9-057e4385fbbc
	I0719 03:44:19.642713    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e11585d-ef3a-4b4f-9e96-5cd5427a24ec
	I0719 03:44:19.642713    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:19 GMT
	I0719 03:44:19.642713    3032 round_trippers.go:580]     Audit-Id: d1d0d119-5547-4e6d-8e39-354357bf0c2f
	I0719 03:44:19.643273    3032 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-365100","uid":"cd0b89c1-58ba-4538-a4ce-3b1e0f71885a","resourceVersion":"398","creationTimestamp":"2024-07-19T03:43:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-365100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"functional-365100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T03_43_15_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-19T03:43:11Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0719 03:44:20.127199    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-365100
	I0719 03:44:20.127199    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:20.127199    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:20.127199    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:20.133577    3032 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 03:44:20.133577    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:20.133577    3032 round_trippers.go:580]     Audit-Id: b80be2ae-2e30-4aff-a643-888a5f4a3e27
	I0719 03:44:20.133577    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:20.133577    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:20.133577    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f9a8a5c9-f618-466c-8da9-057e4385fbbc
	I0719 03:44:20.133577    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e11585d-ef3a-4b4f-9e96-5cd5427a24ec
	I0719 03:44:20.133577    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:20 GMT
	I0719 03:44:20.134248    3032 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-365100","namespace":"kube-system","uid":"9a8dbf29-88ca-4e14-87e5-83299d40791b","resourceVersion":"443","creationTimestamp":"2024-07-19T03:43:12Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"ce1d47ed07ecac718d5cc31bba966dcf","kubernetes.io/config.mirror":"ce1d47ed07ecac718d5cc31bba966dcf","kubernetes.io/config.seen":"2024-07-19T03:43:05.820416011Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-365100","uid":"cd0b89c1-58ba-4538-a4ce-3b1e0f71885a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T03:43:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I0719 03:44:20.135254    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/nodes/functional-365100
	I0719 03:44:20.135254    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:20.135254    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:20.135254    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:20.142821    3032 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0719 03:44:20.142821    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:20.142821    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:20.142821    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:20.142821    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f9a8a5c9-f618-466c-8da9-057e4385fbbc
	I0719 03:44:20.142821    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e11585d-ef3a-4b4f-9e96-5cd5427a24ec
	I0719 03:44:20.142821    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:20 GMT
	I0719 03:44:20.142821    3032 round_trippers.go:580]     Audit-Id: d2c3568b-0a24-4bb8-b5e1-b9aa070b5843
	I0719 03:44:20.143466    3032 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-365100","uid":"cd0b89c1-58ba-4538-a4ce-3b1e0f71885a","resourceVersion":"398","creationTimestamp":"2024-07-19T03:43:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-365100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"functional-365100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T03_43_15_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-19T03:43:11Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0719 03:44:20.628338    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-365100
	I0719 03:44:20.628414    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:20.628414    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:20.628414    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:20.635414    3032 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0719 03:44:20.635495    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:20.635495    3032 round_trippers.go:580]     Audit-Id: 16542361-8696-4817-b3e8-121587f67909
	I0719 03:44:20.635553    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:20.635553    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:20.635553    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f9a8a5c9-f618-466c-8da9-057e4385fbbc
	I0719 03:44:20.635553    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e11585d-ef3a-4b4f-9e96-5cd5427a24ec
	I0719 03:44:20.635581    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:20 GMT
	I0719 03:44:20.635869    3032 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-365100","namespace":"kube-system","uid":"9a8dbf29-88ca-4e14-87e5-83299d40791b","resourceVersion":"443","creationTimestamp":"2024-07-19T03:43:12Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"ce1d47ed07ecac718d5cc31bba966dcf","kubernetes.io/config.mirror":"ce1d47ed07ecac718d5cc31bba966dcf","kubernetes.io/config.seen":"2024-07-19T03:43:05.820416011Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-365100","uid":"cd0b89c1-58ba-4538-a4ce-3b1e0f71885a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T03:43:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I0719 03:44:20.636702    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/nodes/functional-365100
	I0719 03:44:20.636702    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:20.636702    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:20.636799    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:20.643289    3032 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 03:44:20.643289    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:20.643289    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:20 GMT
	I0719 03:44:20.643289    3032 round_trippers.go:580]     Audit-Id: c311d851-7fdb-42e2-9d37-bb28fc281cab
	I0719 03:44:20.643289    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:20.643289    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:20.643289    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f9a8a5c9-f618-466c-8da9-057e4385fbbc
	I0719 03:44:20.643289    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e11585d-ef3a-4b4f-9e96-5cd5427a24ec
	I0719 03:44:20.643978    3032 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-365100","uid":"cd0b89c1-58ba-4538-a4ce-3b1e0f71885a","resourceVersion":"398","creationTimestamp":"2024-07-19T03:43:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-365100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"functional-365100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T03_43_15_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-19T03:43:11Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0719 03:44:20.643978    3032 pod_ready.go:102] pod "kube-apiserver-functional-365100" in "kube-system" namespace has status "Ready":"False"
	I0719 03:44:21.128626    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-365100
	I0719 03:44:21.128703    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:21.128703    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:21.128703    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:21.134158    3032 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 03:44:21.134199    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:21.134199    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f9a8a5c9-f618-466c-8da9-057e4385fbbc
	I0719 03:44:21.134199    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e11585d-ef3a-4b4f-9e96-5cd5427a24ec
	I0719 03:44:21.134199    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:21 GMT
	I0719 03:44:21.134199    3032 round_trippers.go:580]     Audit-Id: 86470d75-dc9c-487b-bcf4-963b9daecc34
	I0719 03:44:21.134199    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:21.134199    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:21.134923    3032 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-365100","namespace":"kube-system","uid":"9a8dbf29-88ca-4e14-87e5-83299d40791b","resourceVersion":"443","creationTimestamp":"2024-07-19T03:43:12Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"ce1d47ed07ecac718d5cc31bba966dcf","kubernetes.io/config.mirror":"ce1d47ed07ecac718d5cc31bba966dcf","kubernetes.io/config.seen":"2024-07-19T03:43:05.820416011Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-365100","uid":"cd0b89c1-58ba-4538-a4ce-3b1e0f71885a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T03:43:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I0719 03:44:21.135676    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/nodes/functional-365100
	I0719 03:44:21.135676    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:21.135676    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:21.135676    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:21.141254    3032 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 03:44:21.141254    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:21.141254    3032 round_trippers.go:580]     Audit-Id: e8c6fdf2-62ac-4529-882e-c5d403366c86
	I0719 03:44:21.141254    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:21.141254    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:21.141254    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f9a8a5c9-f618-466c-8da9-057e4385fbbc
	I0719 03:44:21.141254    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e11585d-ef3a-4b4f-9e96-5cd5427a24ec
	I0719 03:44:21.141254    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:21 GMT
	I0719 03:44:21.141922    3032 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-365100","uid":"cd0b89c1-58ba-4538-a4ce-3b1e0f71885a","resourceVersion":"398","creationTimestamp":"2024-07-19T03:43:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-365100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"functional-365100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T03_43_15_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-19T03:43:11Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0719 03:44:21.619786    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-365100
	I0719 03:44:21.619861    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:21.619861    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:21.619934    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:21.626332    3032 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 03:44:21.626332    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:21.626332    3032 round_trippers.go:580]     Audit-Id: dec76aaa-ac09-46fd-8ddc-8f66c9ac5f1b
	I0719 03:44:21.626332    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:21.626332    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:21.626332    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f9a8a5c9-f618-466c-8da9-057e4385fbbc
	I0719 03:44:21.626332    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e11585d-ef3a-4b4f-9e96-5cd5427a24ec
	I0719 03:44:21.626332    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:21 GMT
	I0719 03:44:21.626891    3032 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-365100","namespace":"kube-system","uid":"9a8dbf29-88ca-4e14-87e5-83299d40791b","resourceVersion":"443","creationTimestamp":"2024-07-19T03:43:12Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"ce1d47ed07ecac718d5cc31bba966dcf","kubernetes.io/config.mirror":"ce1d47ed07ecac718d5cc31bba966dcf","kubernetes.io/config.seen":"2024-07-19T03:43:05.820416011Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-365100","uid":"cd0b89c1-58ba-4538-a4ce-3b1e0f71885a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T03:43:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I0719 03:44:21.627828    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/nodes/functional-365100
	I0719 03:44:21.627878    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:21.627878    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:21.627878    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:21.633996    3032 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 03:44:21.633996    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:21.633996    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:21.633996    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f9a8a5c9-f618-466c-8da9-057e4385fbbc
	I0719 03:44:21.633996    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e11585d-ef3a-4b4f-9e96-5cd5427a24ec
	I0719 03:44:21.633996    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:21 GMT
	I0719 03:44:21.633996    3032 round_trippers.go:580]     Audit-Id: 6d856d3c-8053-4142-a638-f181cbea4928
	I0719 03:44:21.633996    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:21.634660    3032 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-365100","uid":"cd0b89c1-58ba-4538-a4ce-3b1e0f71885a","resourceVersion":"398","creationTimestamp":"2024-07-19T03:43:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-365100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"functional-365100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T03_43_15_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-19T03:43:11Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0719 03:44:22.121696    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-365100
	I0719 03:44:22.121808    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:22.121808    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:22.121808    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:22.127923    3032 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 03:44:22.127923    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:22.127923    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:22 GMT
	I0719 03:44:22.127923    3032 round_trippers.go:580]     Audit-Id: 254efaf2-09a8-465e-96a0-4624d4a402b6
	I0719 03:44:22.127923    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:22.127923    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:22.127923    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f9a8a5c9-f618-466c-8da9-057e4385fbbc
	I0719 03:44:22.127923    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e11585d-ef3a-4b4f-9e96-5cd5427a24ec
	I0719 03:44:22.127923    3032 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-365100","namespace":"kube-system","uid":"9a8dbf29-88ca-4e14-87e5-83299d40791b","resourceVersion":"494","creationTimestamp":"2024-07-19T03:43:12Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"ce1d47ed07ecac718d5cc31bba966dcf","kubernetes.io/config.mirror":"ce1d47ed07ecac718d5cc31bba966dcf","kubernetes.io/config.seen":"2024-07-19T03:43:05.820416011Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-365100","uid":"cd0b89c1-58ba-4538-a4ce-3b1e0f71885a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T03:43:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8985 chars]
	I0719 03:44:22.128806    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/nodes/functional-365100
	I0719 03:44:22.129347    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:22.129347    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:22.129347    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:22.136718    3032 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0719 03:44:22.136810    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:22.136810    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:22 GMT
	I0719 03:44:22.136865    3032 round_trippers.go:580]     Audit-Id: 7f37aede-6878-408a-b962-9c7a7ea5abe9
	I0719 03:44:22.136865    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:22.136865    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:22.136865    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f9a8a5c9-f618-466c-8da9-057e4385fbbc
	I0719 03:44:22.136865    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e11585d-ef3a-4b4f-9e96-5cd5427a24ec
	I0719 03:44:22.136865    3032 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-365100","uid":"cd0b89c1-58ba-4538-a4ce-3b1e0f71885a","resourceVersion":"398","creationTimestamp":"2024-07-19T03:43:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-365100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"functional-365100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T03_43_15_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-19T03:43:11Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0719 03:44:22.622679    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-365100
	I0719 03:44:22.622679    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:22.622679    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:22.622679    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:22.630172    3032 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0719 03:44:22.630172    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:22.630172    3032 round_trippers.go:580]     Audit-Id: 9376025f-392a-4739-83a5-be2e61d58f6b
	I0719 03:44:22.630172    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:22.630172    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:22.630172    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f9a8a5c9-f618-466c-8da9-057e4385fbbc
	I0719 03:44:22.630172    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e11585d-ef3a-4b4f-9e96-5cd5427a24ec
	I0719 03:44:22.630172    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:22 GMT
	I0719 03:44:22.630172    3032 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-365100","namespace":"kube-system","uid":"9a8dbf29-88ca-4e14-87e5-83299d40791b","resourceVersion":"495","creationTimestamp":"2024-07-19T03:43:12Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"ce1d47ed07ecac718d5cc31bba966dcf","kubernetes.io/config.mirror":"ce1d47ed07ecac718d5cc31bba966dcf","kubernetes.io/config.seen":"2024-07-19T03:43:05.820416011Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-365100","uid":"cd0b89c1-58ba-4538-a4ce-3b1e0f71885a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T03:43:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8742 chars]
	I0719 03:44:22.631124    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/nodes/functional-365100
	I0719 03:44:22.631124    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:22.631243    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:22.631243    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:22.637315    3032 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 03:44:22.637315    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:22.637315    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:22.637315    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:22.637315    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f9a8a5c9-f618-466c-8da9-057e4385fbbc
	I0719 03:44:22.637315    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e11585d-ef3a-4b4f-9e96-5cd5427a24ec
	I0719 03:44:22.637315    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:22 GMT
	I0719 03:44:22.637315    3032 round_trippers.go:580]     Audit-Id: f9092032-17f6-4917-9e17-570e0c471cd1
	I0719 03:44:22.637315    3032 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-365100","uid":"cd0b89c1-58ba-4538-a4ce-3b1e0f71885a","resourceVersion":"398","creationTimestamp":"2024-07-19T03:43:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-365100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"functional-365100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T03_43_15_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-19T03:43:11Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0719 03:44:22.638460    3032 pod_ready.go:92] pod "kube-apiserver-functional-365100" in "kube-system" namespace has status "Ready":"True"
	I0719 03:44:22.638519    3032 pod_ready.go:81] duration metric: took 10.02105s for pod "kube-apiserver-functional-365100" in "kube-system" namespace to be "Ready" ...
	I0719 03:44:22.638593    3032 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-365100" in "kube-system" namespace to be "Ready" ...
	I0719 03:44:22.638593    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-365100
	I0719 03:44:22.638593    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:22.638593    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:22.638593    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:22.642977    3032 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 03:44:22.643730    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:22.643730    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:22 GMT
	I0719 03:44:22.643730    3032 round_trippers.go:580]     Audit-Id: 930b5d37-c430-4c5c-8a5f-dea5b29fa2f9
	I0719 03:44:22.643730    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:22.643730    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:22.643730    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f9a8a5c9-f618-466c-8da9-057e4385fbbc
	I0719 03:44:22.643730    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e11585d-ef3a-4b4f-9e96-5cd5427a24ec
	I0719 03:44:22.644133    3032 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-365100","namespace":"kube-system","uid":"ca6aef72-cc5a-45ad-85ef-c3cbc7d70344","resourceVersion":"491","creationTimestamp":"2024-07-19T03:43:15Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ce029d3dbb37dee230fead0468e21627","kubernetes.io/config.mirror":"ce029d3dbb37dee230fead0468e21627","kubernetes.io/config.seen":"2024-07-19T03:43:14.806301170Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-365100","uid":"cd0b89c1-58ba-4538-a4ce-3b1e0f71885a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T03:43:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 8315 chars]
	I0719 03:44:22.644184    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/nodes/functional-365100
	I0719 03:44:22.644184    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:22.644755    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:22.644755    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:22.649013    3032 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 03:44:22.649643    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:22.649643    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:22.649643    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:22.649688    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f9a8a5c9-f618-466c-8da9-057e4385fbbc
	I0719 03:44:22.649688    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e11585d-ef3a-4b4f-9e96-5cd5427a24ec
	I0719 03:44:22.649688    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:22 GMT
	I0719 03:44:22.649688    3032 round_trippers.go:580]     Audit-Id: 76e14778-37c6-43c2-a88c-adbb5fd749aa
	I0719 03:44:22.649934    3032 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-365100","uid":"cd0b89c1-58ba-4538-a4ce-3b1e0f71885a","resourceVersion":"398","creationTimestamp":"2024-07-19T03:43:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-365100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"functional-365100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T03_43_15_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-19T03:43:11Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0719 03:44:22.650345    3032 pod_ready.go:92] pod "kube-controller-manager-functional-365100" in "kube-system" namespace has status "Ready":"True"
	I0719 03:44:22.650345    3032 pod_ready.go:81] duration metric: took 11.7525ms for pod "kube-controller-manager-functional-365100" in "kube-system" namespace to be "Ready" ...
	I0719 03:44:22.650408    3032 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-w5wzv" in "kube-system" namespace to be "Ready" ...
	I0719 03:44:22.650494    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/namespaces/kube-system/pods/kube-proxy-w5wzv
	I0719 03:44:22.650494    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:22.650494    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:22.650494    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:22.658444    3032 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0719 03:44:22.658444    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:22.658444    3032 round_trippers.go:580]     Audit-Id: 426cedee-72e9-4048-89c7-d9738de124c2
	I0719 03:44:22.658444    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:22.658444    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:22.658444    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f9a8a5c9-f618-466c-8da9-057e4385fbbc
	I0719 03:44:22.658444    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e11585d-ef3a-4b4f-9e96-5cd5427a24ec
	I0719 03:44:22.658444    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:22 GMT
	I0719 03:44:22.658444    3032 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-w5wzv","generateName":"kube-proxy-","namespace":"kube-system","uid":"599062db-62d6-4f6e-9b1b-29f4709020f1","resourceVersion":"448","creationTimestamp":"2024-07-19T03:43:28Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"09c8ef76-cc0e-46c6-b83f-5f63bc416e27","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T03:43:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"09c8ef76-cc0e-46c6-b83f-5f63bc416e27\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6030 chars]
	I0719 03:44:22.659610    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/nodes/functional-365100
	I0719 03:44:22.659610    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:22.659669    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:22.659669    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:22.663860    3032 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 03:44:22.663860    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:22.663860    3032 round_trippers.go:580]     Audit-Id: 95c5dfda-36b9-450d-943e-66c44607f23a
	I0719 03:44:22.663860    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:22.663860    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:22.663860    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f9a8a5c9-f618-466c-8da9-057e4385fbbc
	I0719 03:44:22.663860    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e11585d-ef3a-4b4f-9e96-5cd5427a24ec
	I0719 03:44:22.663860    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:22 GMT
	I0719 03:44:22.663860    3032 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-365100","uid":"cd0b89c1-58ba-4538-a4ce-3b1e0f71885a","resourceVersion":"398","creationTimestamp":"2024-07-19T03:43:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-365100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"functional-365100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T03_43_15_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-19T03:43:11Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0719 03:44:22.664636    3032 pod_ready.go:92] pod "kube-proxy-w5wzv" in "kube-system" namespace has status "Ready":"True"
	I0719 03:44:22.664673    3032 pod_ready.go:81] duration metric: took 14.2644ms for pod "kube-proxy-w5wzv" in "kube-system" namespace to be "Ready" ...
	I0719 03:44:22.664673    3032 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-365100" in "kube-system" namespace to be "Ready" ...
	I0719 03:44:22.664846    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-365100
	I0719 03:44:22.664846    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:22.664846    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:22.664846    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:22.669112    3032 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 03:44:22.669112    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:22.669112    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e11585d-ef3a-4b4f-9e96-5cd5427a24ec
	I0719 03:44:22.669112    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:22 GMT
	I0719 03:44:22.669112    3032 round_trippers.go:580]     Audit-Id: 857d1b45-6350-4189-933d-dba10545fee2
	I0719 03:44:22.669112    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:22.669112    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:22.669112    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f9a8a5c9-f618-466c-8da9-057e4385fbbc
	I0719 03:44:22.669112    3032 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-365100","namespace":"kube-system","uid":"4152f9c8-f230-4c69-83a8-c74692cd4e0d","resourceVersion":"488","creationTimestamp":"2024-07-19T03:43:13Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"45a4ebe0c243c0225d03aa1609c41e3d","kubernetes.io/config.mirror":"45a4ebe0c243c0225d03aa1609c41e3d","kubernetes.io/config.seen":"2024-07-19T03:43:05.820428312Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-365100","uid":"cd0b89c1-58ba-4538-a4ce-3b1e0f71885a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T03:43:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5197 chars]
	I0719 03:44:22.669844    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/nodes/functional-365100
	I0719 03:44:22.669844    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:22.669844    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:22.669844    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:22.675351    3032 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 03:44:22.675397    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:22.675454    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e11585d-ef3a-4b4f-9e96-5cd5427a24ec
	I0719 03:44:22.675454    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:22 GMT
	I0719 03:44:22.675454    3032 round_trippers.go:580]     Audit-Id: a7b052ed-49bf-4f01-b2a2-2527a2f7079e
	I0719 03:44:22.675454    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:22.675454    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:22.675454    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f9a8a5c9-f618-466c-8da9-057e4385fbbc
	I0719 03:44:22.675794    3032 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-365100","uid":"cd0b89c1-58ba-4538-a4ce-3b1e0f71885a","resourceVersion":"398","creationTimestamp":"2024-07-19T03:43:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-365100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"functional-365100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T03_43_15_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-19T03:43:11Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0719 03:44:22.676425    3032 pod_ready.go:92] pod "kube-scheduler-functional-365100" in "kube-system" namespace has status "Ready":"True"
	I0719 03:44:22.676491    3032 pod_ready.go:81] duration metric: took 11.8181ms for pod "kube-scheduler-functional-365100" in "kube-system" namespace to be "Ready" ...
	I0719 03:44:22.676551    3032 pod_ready.go:38] duration metric: took 10.5722403s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 03:44:22.676616    3032 api_server.go:52] waiting for apiserver process to appear ...
	I0719 03:44:22.691996    3032 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 03:44:22.721191    3032 command_runner.go:130] > 6269
	I0719 03:44:22.721191    3032 api_server.go:72] duration metric: took 19.4506821s to wait for apiserver process to appear ...
	I0719 03:44:22.721191    3032 api_server.go:88] waiting for apiserver healthz status ...
	I0719 03:44:22.721331    3032 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:63174/healthz ...
	I0719 03:44:22.734012    3032 api_server.go:279] https://127.0.0.1:63174/healthz returned 200:
	ok
	I0719 03:44:22.734428    3032 round_trippers.go:463] GET https://127.0.0.1:63174/version
	I0719 03:44:22.734428    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:22.734428    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:22.734494    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:22.736798    3032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 03:44:22.737852    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:22.737852    3032 round_trippers.go:580]     Content-Length: 263
	I0719 03:44:22.737904    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:22 GMT
	I0719 03:44:22.737932    3032 round_trippers.go:580]     Audit-Id: 479edf42-a11b-4527-8156-87d84a0756f7
	I0719 03:44:22.737932    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:22.737932    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:22.737932    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f9a8a5c9-f618-466c-8da9-057e4385fbbc
	I0719 03:44:22.737932    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e11585d-ef3a-4b4f-9e96-5cd5427a24ec
	I0719 03:44:22.737932    3032 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0719 03:44:22.737932    3032 api_server.go:141] control plane version: v1.30.3
	I0719 03:44:22.737932    3032 api_server.go:131] duration metric: took 16.741ms to wait for apiserver health ...
	I0719 03:44:22.737932    3032 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 03:44:22.737932    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/namespaces/kube-system/pods
	I0719 03:44:22.737932    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:22.737932    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:22.737932    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:22.744208    3032 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 03:44:22.744252    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:22.744252    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f9a8a5c9-f618-466c-8da9-057e4385fbbc
	I0719 03:44:22.744252    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e11585d-ef3a-4b4f-9e96-5cd5427a24ec
	I0719 03:44:22.744252    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:22 GMT
	I0719 03:44:22.744252    3032 round_trippers.go:580]     Audit-Id: 07ab276f-c08d-429e-beed-3749f7263689
	I0719 03:44:22.744252    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:22.744352    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:22.745938    3032 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"496"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-j8zns","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"adcc4338-35fa-4b8d-a2b8-d768306b7701","resourceVersion":"489","creationTimestamp":"2024-07-19T03:43:28Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"be30cc86-1696-4615-ae2f-ee1803ac64c8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T03:43:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"be30cc86-1696-4615-ae2f-ee1803ac64c8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 52145 chars]
	I0719 03:44:22.748461    3032 system_pods.go:59] 7 kube-system pods found
	I0719 03:44:22.748784    3032 system_pods.go:61] "coredns-7db6d8ff4d-j8zns" [adcc4338-35fa-4b8d-a2b8-d768306b7701] Running
	I0719 03:44:22.748784    3032 system_pods.go:61] "etcd-functional-365100" [80f2b059-2dd0-41b0-9ed6-0c24ec06ad28] Running
	I0719 03:44:22.748784    3032 system_pods.go:61] "kube-apiserver-functional-365100" [9a8dbf29-88ca-4e14-87e5-83299d40791b] Running
	I0719 03:44:22.748784    3032 system_pods.go:61] "kube-controller-manager-functional-365100" [ca6aef72-cc5a-45ad-85ef-c3cbc7d70344] Running
	I0719 03:44:22.748784    3032 system_pods.go:61] "kube-proxy-w5wzv" [599062db-62d6-4f6e-9b1b-29f4709020f1] Running
	I0719 03:44:22.748784    3032 system_pods.go:61] "kube-scheduler-functional-365100" [4152f9c8-f230-4c69-83a8-c74692cd4e0d] Running
	I0719 03:44:22.748784    3032 system_pods.go:61] "storage-provisioner" [5c4071c6-3023-4a1d-8122-5403d1141087] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0719 03:44:22.748784    3032 system_pods.go:74] duration metric: took 10.8515ms to wait for pod list to return data ...
	I0719 03:44:22.748784    3032 default_sa.go:34] waiting for default service account to be created ...
	I0719 03:44:22.749044    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/namespaces/default/serviceaccounts
	I0719 03:44:22.749044    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:22.749044    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:22.749044    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:22.753752    3032 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 03:44:22.753752    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:22.753752    3032 round_trippers.go:580]     Audit-Id: 81a5758d-2c0c-41e3-bf8b-d7dd71305ed2
	I0719 03:44:22.753752    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:22.753752    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:22.753752    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f9a8a5c9-f618-466c-8da9-057e4385fbbc
	I0719 03:44:22.753752    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e11585d-ef3a-4b4f-9e96-5cd5427a24ec
	I0719 03:44:22.753752    3032 round_trippers.go:580]     Content-Length: 261
	I0719 03:44:22.753752    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:22 GMT
	I0719 03:44:22.753752    3032 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"496"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"ff798edd-fb92-4302-bd51-2329d5c1cd86","resourceVersion":"312","creationTimestamp":"2024-07-19T03:43:28Z"}}]}
	I0719 03:44:22.753752    3032 default_sa.go:45] found service account: "default"
	I0719 03:44:22.753752    3032 default_sa.go:55] duration metric: took 4.9676ms for default service account to be created ...
	I0719 03:44:22.753752    3032 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 03:44:22.824062    3032 request.go:629] Waited for 70.3097ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:63174/api/v1/namespaces/kube-system/pods
	I0719 03:44:22.824264    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/namespaces/kube-system/pods
	I0719 03:44:22.824264    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:22.824264    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:22.824264    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:22.831077    3032 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 03:44:22.831178    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:22.831229    3032 round_trippers.go:580]     Audit-Id: 2b2d22a6-b67c-44ad-9f83-54b1983bd8ab
	I0719 03:44:22.831229    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:22.831229    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:22.831229    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f9a8a5c9-f618-466c-8da9-057e4385fbbc
	I0719 03:44:22.831285    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e11585d-ef3a-4b4f-9e96-5cd5427a24ec
	I0719 03:44:22.831285    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:22 GMT
	I0719 03:44:22.833427    3032 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"497"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-j8zns","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"adcc4338-35fa-4b8d-a2b8-d768306b7701","resourceVersion":"489","creationTimestamp":"2024-07-19T03:43:28Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"be30cc86-1696-4615-ae2f-ee1803ac64c8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T03:43:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"be30cc86-1696-4615-ae2f-ee1803ac64c8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 52145 chars]
	I0719 03:44:22.836404    3032 system_pods.go:86] 7 kube-system pods found
	I0719 03:44:22.836404    3032 system_pods.go:89] "coredns-7db6d8ff4d-j8zns" [adcc4338-35fa-4b8d-a2b8-d768306b7701] Running
	I0719 03:44:22.836404    3032 system_pods.go:89] "etcd-functional-365100" [80f2b059-2dd0-41b0-9ed6-0c24ec06ad28] Running
	I0719 03:44:22.836479    3032 system_pods.go:89] "kube-apiserver-functional-365100" [9a8dbf29-88ca-4e14-87e5-83299d40791b] Running
	I0719 03:44:22.836479    3032 system_pods.go:89] "kube-controller-manager-functional-365100" [ca6aef72-cc5a-45ad-85ef-c3cbc7d70344] Running
	I0719 03:44:22.836479    3032 system_pods.go:89] "kube-proxy-w5wzv" [599062db-62d6-4f6e-9b1b-29f4709020f1] Running
	I0719 03:44:22.836479    3032 system_pods.go:89] "kube-scheduler-functional-365100" [4152f9c8-f230-4c69-83a8-c74692cd4e0d] Running
	I0719 03:44:22.836533    3032 system_pods.go:89] "storage-provisioner" [5c4071c6-3023-4a1d-8122-5403d1141087] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0719 03:44:22.836555    3032 system_pods.go:126] duration metric: took 82.803ms to wait for k8s-apps to be running ...
	I0719 03:44:22.836622    3032 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 03:44:22.848208    3032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 03:44:22.874976    3032 system_svc.go:56] duration metric: took 38.3892ms WaitForService to wait for kubelet
	I0719 03:44:22.874976    3032 kubeadm.go:582] duration metric: took 19.6044662s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 03:44:22.875045    3032 node_conditions.go:102] verifying NodePressure condition ...
	I0719 03:44:23.026600    3032 request.go:629] Waited for 151.2657ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:63174/api/v1/nodes
	I0719 03:44:23.026786    3032 round_trippers.go:463] GET https://127.0.0.1:63174/api/v1/nodes
	I0719 03:44:23.026786    3032 round_trippers.go:469] Request Headers:
	I0719 03:44:23.026786    3032 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 03:44:23.026786    3032 round_trippers.go:473]     Accept: application/json, */*
	I0719 03:44:23.033150    3032 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 03:44:23.033150    3032 round_trippers.go:577] Response Headers:
	I0719 03:44:23.033150    3032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 03:44:23.033150    3032 round_trippers.go:580]     Content-Type: application/json
	I0719 03:44:23.033150    3032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f9a8a5c9-f618-466c-8da9-057e4385fbbc
	I0719 03:44:23.033150    3032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8e11585d-ef3a-4b4f-9e96-5cd5427a24ec
	I0719 03:44:23.033150    3032 round_trippers.go:580]     Date: Fri, 19 Jul 2024 03:44:23 GMT
	I0719 03:44:23.033150    3032 round_trippers.go:580]     Audit-Id: ff08b4ca-f301-43b2-a924-a76e709e7faa
	I0719 03:44:23.033150    3032 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"497"},"items":[{"metadata":{"name":"functional-365100","uid":"cd0b89c1-58ba-4538-a4ce-3b1e0f71885a","resourceVersion":"398","creationTimestamp":"2024-07-19T03:43:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-365100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"functional-365100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T03_43_15_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedF
ields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4908 chars]
	I0719 03:44:23.034878    3032 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I0719 03:44:23.034984    3032 node_conditions.go:123] node cpu capacity is 16
	I0719 03:44:23.034984    3032 node_conditions.go:105] duration metric: took 159.9378ms to run NodePressure ...
	I0719 03:44:23.034984    3032 start.go:241] waiting for startup goroutines ...
	I0719 03:44:23.034984    3032 start.go:246] waiting for cluster config update ...
	I0719 03:44:23.035081    3032 start.go:255] writing updated cluster config ...
	I0719 03:44:23.048501    3032 ssh_runner.go:195] Run: rm -f paused
	I0719 03:44:23.196697    3032 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0719 03:44:23.200098    3032 out.go:177] * Done! kubectl is now configured to use "functional-365100" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jul 19 03:43:58 functional-365100 systemd[1]: Started Docker Application Container Engine.
	Jul 19 03:43:59 functional-365100 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	Jul 19 03:43:59 functional-365100 systemd[1]: cri-docker.service: Deactivated successfully.
	Jul 19 03:43:59 functional-365100 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	Jul 19 03:43:59 functional-365100 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	Jul 19 03:43:59 functional-365100 cri-dockerd[5214]: time="2024-07-19T03:43:59Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Jul 19 03:43:59 functional-365100 cri-dockerd[5214]: time="2024-07-19T03:43:59Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Jul 19 03:43:59 functional-365100 cri-dockerd[5214]: time="2024-07-19T03:43:59Z" level=info msg="Start docker client with request timeout 0s"
	Jul 19 03:43:59 functional-365100 cri-dockerd[5214]: time="2024-07-19T03:43:59Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Jul 19 03:43:59 functional-365100 cri-dockerd[5214]: time="2024-07-19T03:43:59Z" level=info msg="Loaded network plugin cni"
	Jul 19 03:43:59 functional-365100 cri-dockerd[5214]: time="2024-07-19T03:43:59Z" level=info msg="Docker cri networking managed by network plugin cni"
	Jul 19 03:43:59 functional-365100 cri-dockerd[5214]: time="2024-07-19T03:43:59Z" level=info msg="Setting cgroupDriver cgroupfs"
	Jul 19 03:43:59 functional-365100 cri-dockerd[5214]: time="2024-07-19T03:43:59Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Jul 19 03:43:59 functional-365100 cri-dockerd[5214]: time="2024-07-19T03:43:59Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Jul 19 03:43:59 functional-365100 cri-dockerd[5214]: time="2024-07-19T03:43:59Z" level=info msg="Start cri-dockerd grpc backend"
	Jul 19 03:43:59 functional-365100 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Jul 19 03:44:00 functional-365100 cri-dockerd[5214]: time="2024-07-19T03:44:00Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-j8zns_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"b998b4c518ab9cd2aba22f765e16efd55be82e807202dd353f32a38db66cc96f\""
	Jul 19 03:44:04 functional-365100 cri-dockerd[5214]: time="2024-07-19T03:44:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/706c38f5c6a659f4087a104a3c2f2ea84a1f681124d3b343310086d0f15afd62/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Jul 19 03:44:04 functional-365100 cri-dockerd[5214]: time="2024-07-19T03:44:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c2f2fd768a2e524e61368ef86666674f92910ce93ec7dc109bef18b7084f3421/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Jul 19 03:44:05 functional-365100 cri-dockerd[5214]: time="2024-07-19T03:44:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2d10ca73c64f449cedc312a8ff06ebc252e51d848a1f3e197fce7dd300b5ac6a/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Jul 19 03:44:05 functional-365100 cri-dockerd[5214]: time="2024-07-19T03:44:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b6dfad7eaabd57cc5f53e4f8001647664bb18b89c91e19433e8d0e4849c625e6/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Jul 19 03:44:05 functional-365100 cri-dockerd[5214]: time="2024-07-19T03:44:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/829c67746d6f33ea5db50c7dbc661bda8c5dfc9016d0e5421da5bcb8b1ac6180/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Jul 19 03:44:05 functional-365100 cri-dockerd[5214]: time="2024-07-19T03:44:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1a4f9dd38eae3caedeaa0b75ee788afd3ff7c43e5063aaec5f0d9afc102d694e/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Jul 19 03:44:05 functional-365100 cri-dockerd[5214]: time="2024-07-19T03:44:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4033f3df54cb1100afb143ef2a83b3ee8c2213c7e2906380c5af686fe62dd31b/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Jul 19 03:44:06 functional-365100 dockerd[4927]: time="2024-07-19T03:44:06.307304238Z" level=info msg="ignoring event" container=f4c4f29a1aa79ec271320bb7f05649172d7e9fd6b7b99304db8b3a386e9ce4d2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	1765a8caabc23       6e38f40d628db       23 seconds ago       Running             storage-provisioner       2                   c2f2fd768a2e5       storage-provisioner
	15a09365265a0       cbb01a7bd410d       41 seconds ago       Running             coredns                   1                   4033f3df54cb1       coredns-7db6d8ff4d-j8zns
	5ef50a08e9609       1f6d574d502f3       42 seconds ago       Running             kube-apiserver            1                   1a4f9dd38eae3       kube-apiserver-functional-365100
	1286d0b4da36f       76932a3b37d7e       42 seconds ago       Running             kube-controller-manager   1                   b6dfad7eaabd5       kube-controller-manager-functional-365100
	e0351b19f437c       55bb025d2cfa5       42 seconds ago       Running             kube-proxy                1                   829c67746d6f3       kube-proxy-w5wzv
	ee51b7d2994af       3edc18e7b7672       42 seconds ago       Running             kube-scheduler            1                   2d10ca73c64f4       kube-scheduler-functional-365100
	f4c4f29a1aa79       6e38f40d628db       43 seconds ago       Exited              storage-provisioner       1                   c2f2fd768a2e5       storage-provisioner
	265ed4c5fdca7       3861cfcd7c04c       43 seconds ago       Running             etcd                      1                   706c38f5c6a65       etcd-functional-365100
	ca90de69c6ec2       cbb01a7bd410d       About a minute ago   Exited              coredns                   0                   b998b4c518ab9       coredns-7db6d8ff4d-j8zns
	1403da9f26135       55bb025d2cfa5       About a minute ago   Exited              kube-proxy                0                   e544977f1296f       kube-proxy-w5wzv
	38b6def40ed6a       3edc18e7b7672       About a minute ago   Exited              kube-scheduler            0                   07526e7d36a91       kube-scheduler-functional-365100
	0299c59785cea       76932a3b37d7e       About a minute ago   Exited              kube-controller-manager   0                   77475cc414fa5       kube-controller-manager-functional-365100
	b8c6705122f0a       3861cfcd7c04c       About a minute ago   Exited              etcd                      0                   b0ed4b3fc23b3       etcd-functional-365100
	bc4194bf235f8       1f6d574d502f3       About a minute ago   Exited              kube-apiserver            0                   56ed735efa860       kube-apiserver-functional-365100
	
	
	==> coredns [15a09365265a] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = f869070685748660180df1b7a47d58cdafcf2f368266578c062d1151dc2c900964aecc5975e8882e6de6fdfb6460463e30ebfaad2ec8f0c3c6436f80225b3b5b
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:48196 - 5519 "HINFO IN 8623900807609243263.2505270978914545547. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.063646136s
	
	
	==> coredns [ca90de69c6ec] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-365100
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-365100
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db
	                    minikube.k8s.io/name=functional-365100
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_19T03_43_15_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 03:43:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-365100
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 03:44:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 03:44:47 +0000   Fri, 19 Jul 2024 03:43:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 03:44:47 +0000   Fri, 19 Jul 2024 03:43:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 03:44:47 +0000   Fri, 19 Jul 2024 03:43:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 03:44:47 +0000   Fri, 19 Jul 2024 03:43:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-365100
	Capacity:
	  cpu:                16
	  ephemeral-storage:  1055762868Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32868764Ki
	  pods:               110
	Allocatable:
	  cpu:                16
	  ephemeral-storage:  1055762868Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32868764Ki
	  pods:               110
	System Info:
	  Machine ID:                 8c86e489a6994e11addd9fa792a0b27d
	  System UUID:                8c86e489a6994e11addd9fa792a0b27d
	  Boot ID:                    732c1326-1f28-4b90-a5e2-449115b83eea
	  Kernel Version:             5.15.146.1-microsoft-standard-WSL2
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-j8zns                     100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     79s
	  kube-system                 etcd-functional-365100                       100m (0%)     0 (0%)      100Mi (0%)       0 (0%)         92s
	  kube-system                 kube-apiserver-functional-365100             250m (1%)     0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-controller-manager-functional-365100    200m (1%)     0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-proxy-w5wzv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-scheduler-functional-365100             100m (0%)     0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (4%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 75s   kube-proxy       
	  Normal  Starting                 35s   kube-proxy       
	  Normal  Starting                 93s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  93s   kubelet          Node functional-365100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    93s   kubelet          Node functional-365100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     93s   kubelet          Node functional-365100 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             92s   kubelet          Node functional-365100 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  92s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                92s   kubelet          Node functional-365100 status is now: NodeReady
	  Normal  RegisteredNode           80s   node-controller  Node functional-365100 event: Registered Node functional-365100 in Controller
	  Normal  RegisteredNode           21s   node-controller  Node functional-365100 event: Registered Node functional-365100 in Controller
	
	
	==> dmesg <==
	[Jul17 02:24] WSL (1) WARNING: /usr/share/zoneinfo/Etc/UTC not found. Is the tzdata package installed?
	[  +0.521008] misc dxg: dxgk: dxgglobal_acquire_channel_lock: Failed to acquire global channel lock
	[  +2.235495] FS-Cache: Duplicate cookie detected
	[  +0.001444] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.001428] FS-Cache: O-cookie d=0000000075c0e344{9P.session} n=00000000df3550a6
	[  +0.001435] FS-Cache: O-key=[10] '34323934393337383731'
	[  +0.000935] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.001193] FS-Cache: N-cookie d=0000000075c0e344{9P.session} n=0000000026a083e0
	[  +0.001772] FS-Cache: N-key=[10] '34323934393337383731'
	[  +0.011112] WSL (2) ERROR: UtilCreateProcessAndWait:665: /bin/mount failed with 2
	[  +0.002013] WSL (1) ERROR: UtilCreateProcessAndWait:687: /bin/mount failed with status 0xff00
	
	[  +0.002955] WSL (1) ERROR: ConfigMountFsTab:2589: Processing fstab with mount -a failed.
	[  +0.006013] WSL (1) ERROR: ConfigApplyWindowsLibPath:2537: open /etc/ld.so.conf.d/ld.wsl.conf
	[  +0.000003]  failed 2
	[  +0.006342] WSL (3) ERROR: UtilCreateProcessAndWait:665: /bin/mount failed with 2
	[  +0.002069] WSL (1) ERROR: UtilCreateProcessAndWait:687: /bin/mount failed with status 0xff00
	
	[  +0.004003] WSL (4) ERROR: UtilCreateProcessAndWait:665: /bin/mount failed with 2
	[  +0.001875] WSL (1) ERROR: UtilCreateProcessAndWait:687: /bin/mount failed with status 0xff00
	
	[  +0.057725] WSL (1) WARNING: /usr/share/zoneinfo/Etc/UTC not found. Is the tzdata package installed?
	[  +0.118975] misc dxg: dxgk: dxgglobal_acquire_channel_lock: Failed to acquire global channel lock
	[  +0.646454] netlink: 'init': attribute type 4 has an invalid length.
	[Jul19 03:24] hrtimer: interrupt took 55264634 ns
	
	
	==> etcd [265ed4c5fdca] <==
	{"level":"info","ts":"2024-07-19T03:44:06.816642Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-07-19T03:44:06.816654Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-07-19T03:44:08.011137Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-19T03:44:08.011536Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-19T03:44:08.011579Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-07-19T03:44:08.011604Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2024-07-19T03:44:08.011635Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-07-19T03:44:08.01169Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2024-07-19T03:44:08.011706Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-07-19T03:44:08.021691Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-365100 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-19T03:44:08.021709Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T03:44:08.0218Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T03:44:08.050199Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-19T03:44:08.096658Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-19T03:44:08.096717Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-19T03:44:08.10325Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-07-19T03:44:12.497969Z","caller":"traceutil/trace.go:171","msg":"trace[582393294] linearizableReadLoop","detail":"{readStateIndex:432; appliedIndex:429; }","duration":"102.586419ms","start":"2024-07-19T03:44:12.395307Z","end":"2024-07-19T03:44:12.497894Z","steps":["trace[582393294] 'read index received'  (duration: 5.595101ms)","trace[582393294] 'applied index is now lower than readState.Index'  (duration: 96.974516ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-19T03:44:12.498173Z","caller":"traceutil/trace.go:171","msg":"trace[1245105083] transaction","detail":"{read_only:false; response_revision:411; number_of_response:1; }","duration":"103.044868ms","start":"2024-07-19T03:44:12.395108Z","end":"2024-07-19T03:44:12.498152Z","steps":["trace[1245105083] 'process raft request'  (duration: 102.727934ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T03:44:12.498224Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.895852ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-j8zns\" ","response":"range_response_count:1 size:4790"}
	{"level":"info","ts":"2024-07-19T03:44:12.498288Z","caller":"traceutil/trace.go:171","msg":"trace[75625598] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7db6d8ff4d-j8zns; range_end:; response_count:1; response_revision:411; }","duration":"102.994062ms","start":"2024-07-19T03:44:12.395285Z","end":"2024-07-19T03:44:12.498279Z","steps":["trace[75625598] 'agreement among raft nodes before linearized reading'  (duration: 102.852647ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T03:44:12.498052Z","caller":"traceutil/trace.go:171","msg":"trace[619514424] transaction","detail":"{read_only:false; response_revision:410; number_of_response:1; }","duration":"103.642632ms","start":"2024-07-19T03:44:12.394394Z","end":"2024-07-19T03:44:12.498036Z","steps":["trace[619514424] 'process raft request'  (duration: 99.492687ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T03:44:12.498495Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.711933ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-node-lease/functional-365100\" ","response":"range_response_count:1 size:572"}
	{"level":"info","ts":"2024-07-19T03:44:12.49864Z","caller":"traceutil/trace.go:171","msg":"trace[298535724] range","detail":"{range_begin:/registry/leases/kube-node-lease/functional-365100; range_end:; response_count:1; response_revision:411; }","duration":"102.902153ms","start":"2024-07-19T03:44:12.395725Z","end":"2024-07-19T03:44:12.498627Z","steps":["trace[298535724] 'agreement among raft nodes before linearized reading'  (duration: 102.644925ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T03:44:12.498584Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.261991ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.49.2\" ","response":"range_response_count:1 size:131"}
	{"level":"info","ts":"2024-07-19T03:44:12.498871Z","caller":"traceutil/trace.go:171","msg":"trace[170261620] range","detail":"{range_begin:/registry/masterleases/192.168.49.2; range_end:; response_count:1; response_revision:411; }","duration":"103.547322ms","start":"2024-07-19T03:44:12.395276Z","end":"2024-07-19T03:44:12.498824Z","steps":["trace[170261620] 'agreement among raft nodes before linearized reading'  (duration: 103.24989ms)"],"step_count":1}
	
	
	==> etcd [b8c6705122f0] <==
	{"level":"info","ts":"2024-07-19T03:43:08.754108Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T03:43:08.75427Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T03:43:08.754297Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T03:43:08.754996Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-19T03:43:08.755224Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-07-19T03:43:11.515842Z","caller":"traceutil/trace.go:171","msg":"trace[1561401119] transaction","detail":"{read_only:false; response_revision:3; number_of_response:1; }","duration":"102.508545ms","start":"2024-07-19T03:43:11.41331Z","end":"2024-07-19T03:43:11.515819Z","steps":["trace[1561401119] 'process raft request'  (duration: 92.639614ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T03:43:28.819431Z","caller":"traceutil/trace.go:171","msg":"trace[1226875693] transaction","detail":"{read_only:false; response_revision:326; number_of_response:1; }","duration":"109.439448ms","start":"2024-07-19T03:43:28.709973Z","end":"2024-07-19T03:43:28.819412Z","steps":["trace[1226875693] 'process raft request'  (duration: 89.679164ms)","trace[1226875693] 'compare'  (duration: 19.583763ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-19T03:43:28.819845Z","caller":"traceutil/trace.go:171","msg":"trace[46975577] transaction","detail":"{read_only:false; response_revision:328; number_of_response:1; }","duration":"101.50683ms","start":"2024-07-19T03:43:28.718325Z","end":"2024-07-19T03:43:28.819832Z","steps":["trace[46975577] 'process raft request'  (duration: 101.33151ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T03:43:28.819891Z","caller":"traceutil/trace.go:171","msg":"trace[764082508] transaction","detail":"{read_only:false; response_revision:327; number_of_response:1; }","duration":"108.895985ms","start":"2024-07-19T03:43:28.710977Z","end":"2024-07-19T03:43:28.819873Z","steps":["trace[764082508] 'process raft request'  (duration: 108.379525ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T03:43:29.426021Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.58037ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128030620830758231 > lease_revoke:<id:70cc90c9161d73eb>","response":"size:29"}
	{"level":"info","ts":"2024-07-19T03:43:29.426338Z","caller":"traceutil/trace.go:171","msg":"trace[1330869818] linearizableReadLoop","detail":"{readStateIndex:358; appliedIndex:357; }","duration":"100.408504ms","start":"2024-07-19T03:43:29.325915Z","end":"2024-07-19T03:43:29.426324Z","steps":["trace[1330869818] 'read index received'  (duration: 18.402µs)","trace[1330869818] 'applied index is now lower than readState.Index'  (duration: 100.388602ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-19T03:43:29.426412Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.485012ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/coredns\" ","response":"range_response_count:1 size:612"}
	{"level":"info","ts":"2024-07-19T03:43:29.426453Z","caller":"traceutil/trace.go:171","msg":"trace[480652009] range","detail":"{range_begin:/registry/configmaps/kube-system/coredns; range_end:; response_count:1; response_revision:344; }","duration":"100.533118ms","start":"2024-07-19T03:43:29.325892Z","end":"2024-07-19T03:43:29.426426Z","steps":["trace[480652009] 'agreement among raft nodes before linearized reading'  (duration: 100.47021ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T03:43:30.508703Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.144267ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:4102"}
	{"level":"info","ts":"2024-07-19T03:43:30.508956Z","caller":"traceutil/trace.go:171","msg":"trace[280103754] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:355; }","duration":"106.483405ms","start":"2024-07-19T03:43:30.402409Z","end":"2024-07-19T03:43:30.508892Z","steps":["trace[280103754] 'range keys from in-memory index tree'  (duration: 106.014151ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T03:43:42.796974Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-19T03:43:42.797051Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"functional-365100","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2024-07-19T03:43:42.797213Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-19T03:43:42.797376Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-19T03:43:42.897826Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-19T03:43:42.897902Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-19T03:43:42.898072Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2024-07-19T03:43:42.996792Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-07-19T03:43:42.997286Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-07-19T03:43:42.997325Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"functional-365100","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 03:44:48 up 2 days,  1:20,  0 users,  load average: 1.65, 2.12, 1.70
	Linux functional-365100 5.15.146.1-microsoft-standard-WSL2 #1 SMP Thu Jan 11 04:09:03 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kube-apiserver [5ef50a08e960] <==
	I0719 03:44:12.024768       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0719 03:44:12.024901       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0719 03:44:12.024909       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0719 03:44:12.026295       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0719 03:44:12.026656       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0719 03:44:12.209437       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0719 03:44:12.209590       1 policy_source.go:224] refreshing policies
	I0719 03:44:12.223600       1 shared_informer.go:320] Caches are synced for configmaps
	I0719 03:44:12.223684       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0719 03:44:12.223709       1 aggregator.go:165] initial CRD sync complete...
	I0719 03:44:12.223716       1 autoregister_controller.go:141] Starting autoregister controller
	I0719 03:44:12.223723       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0719 03:44:12.223728       1 cache.go:39] Caches are synced for autoregister controller
	I0719 03:44:12.223898       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0719 03:44:12.231095       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0719 03:44:12.231205       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0719 03:44:12.297866       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0719 03:44:12.299502       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0719 03:44:12.394132       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0719 03:44:12.394168       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0719 03:44:12.394266       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	E0719 03:44:12.595305       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0719 03:44:13.042303       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0719 03:44:25.934148       1 controller.go:615] quota admission added evaluator for: endpoints
	I0719 03:44:25.963828       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [bc4194bf235f] <==
	W0719 03:43:52.089203       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 03:43:52.098656       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 03:43:52.200832       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 03:43:52.242907       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 03:43:52.286772       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 03:43:52.286959       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 03:43:52.323526       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 03:43:52.335839       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 03:43:52.337159       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 03:43:52.398954       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 03:43:52.403043       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 03:43:52.449552       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 03:43:52.475177       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 03:43:52.484118       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 03:43:52.505165       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 03:43:52.525499       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 03:43:52.577948       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 03:43:52.607870       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 03:43:52.613048       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 03:43:52.672677       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 03:43:52.674273       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 03:43:52.771154       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 03:43:52.831169       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 03:43:52.832597       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 03:43:52.849154       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [0299c59785ce] <==
	I0719 03:43:27.932906       1 shared_informer.go:320] Caches are synced for node
	I0719 03:43:27.933075       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0719 03:43:27.933116       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0719 03:43:27.933127       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0719 03:43:27.933137       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0719 03:43:27.965964       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="functional-365100" podCIDRs=["10.244.0.0/24"]
	I0719 03:43:28.299230       1 shared_informer.go:320] Caches are synced for garbage collector
	I0719 03:43:28.324670       1 shared_informer.go:320] Caches are synced for garbage collector
	I0719 03:43:28.324799       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0719 03:43:29.101364       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="492.363799ms"
	I0719 03:43:29.201633       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="100.20658ms"
	I0719 03:43:29.201830       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="146.317µs"
	I0719 03:43:29.201997       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="82.51µs"
	I0719 03:43:29.301825       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="121.714µs"
	I0719 03:43:30.301487       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="200.127927ms"
	I0719 03:43:30.327937       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="26.39255ms"
	I0719 03:43:30.328154       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="43.705µs"
	I0719 03:43:33.003638       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="202.723µs"
	I0719 03:43:33.122503       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="79.109µs"
	I0719 03:43:33.202200       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="49.952772ms"
	I0719 03:43:33.202456       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="55.207µs"
	I0719 03:43:39.453125       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="62.807µs"
	I0719 03:43:40.452103       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="82.11µs"
	I0719 03:43:40.474502       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="72.308µs"
	I0719 03:43:40.498727       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="73.409µs"
	
	
	==> kube-controller-manager [1286d0b4da36] <==
	I0719 03:44:25.925828       1 shared_informer.go:320] Caches are synced for HPA
	I0719 03:44:25.926349       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0719 03:44:25.931218       1 shared_informer.go:320] Caches are synced for attach detach
	I0719 03:44:25.942667       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0719 03:44:25.943951       1 shared_informer.go:320] Caches are synced for PVC protection
	I0719 03:44:25.950640       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0719 03:44:25.954612       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0719 03:44:25.957499       1 shared_informer.go:320] Caches are synced for daemon sets
	I0719 03:44:25.995773       1 shared_informer.go:320] Caches are synced for ephemeral
	I0719 03:44:26.017528       1 shared_informer.go:320] Caches are synced for resource quota
	I0719 03:44:26.028101       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0719 03:44:26.031743       1 shared_informer.go:320] Caches are synced for taint
	I0719 03:44:26.032070       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0719 03:44:26.032391       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-365100"
	I0719 03:44:26.032444       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0719 03:44:26.050787       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0719 03:44:26.050955       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0719 03:44:26.051102       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0719 03:44:26.051122       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0719 03:44:26.114655       1 shared_informer.go:320] Caches are synced for resource quota
	I0719 03:44:26.117936       1 shared_informer.go:320] Caches are synced for namespace
	I0719 03:44:26.123744       1 shared_informer.go:320] Caches are synced for service account
	I0719 03:44:26.539392       1 shared_informer.go:320] Caches are synced for garbage collector
	I0719 03:44:26.539564       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0719 03:44:26.599461       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-proxy [1403da9f2613] <==
	I0719 03:43:31.917501       1 server_linux.go:69] "Using iptables proxy"
	I0719 03:43:31.934944       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0719 03:43:31.982241       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0719 03:43:31.982351       1 server_linux.go:165] "Using iptables Proxier"
	I0719 03:43:31.987075       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0719 03:43:31.987188       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0719 03:43:31.987221       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 03:43:31.987739       1 server.go:872] "Version info" version="v1.30.3"
	I0719 03:43:31.987846       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 03:43:31.988955       1 config.go:319] "Starting node config controller"
	I0719 03:43:31.989116       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 03:43:31.989291       1 config.go:192] "Starting service config controller"
	I0719 03:43:31.989312       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 03:43:31.989356       1 config.go:101] "Starting endpoint slice config controller"
	I0719 03:43:31.989370       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 03:43:32.090346       1 shared_informer.go:320] Caches are synced for node config
	I0719 03:43:32.090465       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0719 03:43:32.090494       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [e0351b19f437] <==
	I0719 03:44:07.414308       1 server_linux.go:69] "Using iptables proxy"
	E0719 03:44:07.417723       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-365100\": dial tcp 192.168.49.2:8441: connect: connection refused"
	I0719 03:44:12.396668       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0719 03:44:12.706108       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0719 03:44:12.706687       1 server_linux.go:165] "Using iptables Proxier"
	I0719 03:44:12.712387       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0719 03:44:12.712501       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0719 03:44:12.712527       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 03:44:12.713172       1 server.go:872] "Version info" version="v1.30.3"
	I0719 03:44:12.713330       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 03:44:12.714826       1 config.go:101] "Starting endpoint slice config controller"
	I0719 03:44:12.714967       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 03:44:12.715107       1 config.go:192] "Starting service config controller"
	I0719 03:44:12.715191       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 03:44:12.715193       1 config.go:319] "Starting node config controller"
	I0719 03:44:12.715250       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 03:44:12.816086       1 shared_informer.go:320] Caches are synced for node config
	I0719 03:44:12.816175       1 shared_informer.go:320] Caches are synced for service config
	I0719 03:44:12.816272       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [38b6def40ed6] <==
	E0719 03:43:12.494899       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0719 03:43:12.497528       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0719 03:43:12.497631       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0719 03:43:12.515161       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0719 03:43:12.515251       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0719 03:43:12.577988       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0719 03:43:12.578192       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0719 03:43:12.588597       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0719 03:43:12.588701       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0719 03:43:12.655922       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0719 03:43:12.656026       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0719 03:43:12.712396       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0719 03:43:12.712542       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0719 03:43:12.734780       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0719 03:43:12.734884       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0719 03:43:12.840553       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0719 03:43:12.840664       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0719 03:43:12.861593       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0719 03:43:12.861703       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0719 03:43:12.902972       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0719 03:43:12.903079       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0719 03:43:12.983571       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0719 03:43:12.983768       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0719 03:43:15.226578       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0719 03:43:42.809604       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [ee51b7d2994a] <==
	I0719 03:44:10.315514       1 serving.go:380] Generated self-signed cert in-memory
	W0719 03:44:12.194556       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0719 03:44:12.194634       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0719 03:44:12.194657       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0719 03:44:12.194671       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0719 03:44:12.395163       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0719 03:44:12.395296       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 03:44:12.402482       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0719 03:44:12.402515       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0719 03:44:12.402913       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0719 03:44:12.402551       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0719 03:44:12.503747       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 19 03:44:00 functional-365100 kubelet[2701]: I0719 03:44:00.331127    2701 status_manager.go:853] "Failed to get status for pod" podUID="8bbeeb99916289f65f9a02d3d2f22a97" pod="kube-system/etcd-functional-365100" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-365100\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jul 19 03:44:04 functional-365100 kubelet[2701]: I0719 03:44:04.612349    2701 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2f2fd768a2e524e61368ef86666674f92910ce93ec7dc109bef18b7084f3421"
	Jul 19 03:44:04 functional-365100 kubelet[2701]: I0719 03:44:04.909807    2701 status_manager.go:853] "Failed to get status for pod" podUID="45a4ebe0c243c0225d03aa1609c41e3d" pod="kube-system/kube-scheduler-functional-365100" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-365100\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jul 19 03:44:04 functional-365100 kubelet[2701]: I0719 03:44:04.910479    2701 status_manager.go:853] "Failed to get status for pod" podUID="ce1d47ed07ecac718d5cc31bba966dcf" pod="kube-system/kube-apiserver-functional-365100" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-365100\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jul 19 03:44:04 functional-365100 kubelet[2701]: I0719 03:44:04.911006    2701 status_manager.go:853] "Failed to get status for pod" podUID="ce029d3dbb37dee230fead0468e21627" pod="kube-system/kube-controller-manager-functional-365100" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-365100\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jul 19 03:44:04 functional-365100 kubelet[2701]: I0719 03:44:04.912986    2701 status_manager.go:853] "Failed to get status for pod" podUID="599062db-62d6-4f6e-9b1b-29f4709020f1" pod="kube-system/kube-proxy-w5wzv" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-w5wzv\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jul 19 03:44:04 functional-365100 kubelet[2701]: I0719 03:44:04.996835    2701 status_manager.go:853] "Failed to get status for pod" podUID="adcc4338-35fa-4b8d-a2b8-d768306b7701" pod="kube-system/coredns-7db6d8ff4d-j8zns" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-j8zns\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jul 19 03:44:04 functional-365100 kubelet[2701]: I0719 03:44:04.997473    2701 status_manager.go:853] "Failed to get status for pod" podUID="5c4071c6-3023-4a1d-8122-5403d1141087" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jul 19 03:44:04 functional-365100 kubelet[2701]: I0719 03:44:04.998256    2701 status_manager.go:853] "Failed to get status for pod" podUID="8bbeeb99916289f65f9a02d3d2f22a97" pod="kube-system/etcd-functional-365100" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-365100\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jul 19 03:44:05 functional-365100 kubelet[2701]: E0719 03:44:05.336413    2701 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-365100?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Jul 19 03:44:06 functional-365100 kubelet[2701]: E0719 03:44:06.396996    2701 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-365100\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-365100?resourceVersion=0&timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jul 19 03:44:06 functional-365100 kubelet[2701]: E0719 03:44:06.397709    2701 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-365100\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-365100?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jul 19 03:44:06 functional-365100 kubelet[2701]: E0719 03:44:06.398193    2701 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-365100\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-365100?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jul 19 03:44:06 functional-365100 kubelet[2701]: E0719 03:44:06.398956    2701 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-365100\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-365100?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jul 19 03:44:06 functional-365100 kubelet[2701]: E0719 03:44:06.399402    2701 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-365100\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-365100?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jul 19 03:44:06 functional-365100 kubelet[2701]: E0719 03:44:06.399439    2701 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Jul 19 03:44:07 functional-365100 kubelet[2701]: I0719 03:44:07.806287    2701 scope.go:117] "RemoveContainer" containerID="f4c4f29a1aa79ec271320bb7f05649172d7e9fd6b7b99304db8b3a386e9ce4d2"
	Jul 19 03:44:07 functional-365100 kubelet[2701]: E0719 03:44:07.806776    2701 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(5c4071c6-3023-4a1d-8122-5403d1141087)\"" pod="kube-system/storage-provisioner" podUID="5c4071c6-3023-4a1d-8122-5403d1141087"
	Jul 19 03:44:09 functional-365100 kubelet[2701]: I0719 03:44:09.306901    2701 scope.go:117] "RemoveContainer" containerID="e6d8f882c618028979330c5df57e90ca00d4cc16034402b8350e72af80a5bb33"
	Jul 19 03:44:09 functional-365100 kubelet[2701]: I0719 03:44:09.307233    2701 scope.go:117] "RemoveContainer" containerID="f4c4f29a1aa79ec271320bb7f05649172d7e9fd6b7b99304db8b3a386e9ce4d2"
	Jul 19 03:44:09 functional-365100 kubelet[2701]: E0719 03:44:09.307560    2701 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(5c4071c6-3023-4a1d-8122-5403d1141087)\"" pod="kube-system/storage-provisioner" podUID="5c4071c6-3023-4a1d-8122-5403d1141087"
	Jul 19 03:44:10 functional-365100 kubelet[2701]: I0719 03:44:10.815937    2701 scope.go:117] "RemoveContainer" containerID="f4c4f29a1aa79ec271320bb7f05649172d7e9fd6b7b99304db8b3a386e9ce4d2"
	Jul 19 03:44:10 functional-365100 kubelet[2701]: E0719 03:44:10.816324    2701 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(5c4071c6-3023-4a1d-8122-5403d1141087)\"" pod="kube-system/storage-provisioner" podUID="5c4071c6-3023-4a1d-8122-5403d1141087"
	Jul 19 03:44:12 functional-365100 kubelet[2701]: E0719 03:44:12.099652    2701 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Jul 19 03:44:24 functional-365100 kubelet[2701]: I0719 03:44:24.917772    2701 scope.go:117] "RemoveContainer" containerID="f4c4f29a1aa79ec271320bb7f05649172d7e9fd6b7b99304db8b3a386e9ce4d2"
	
	
	==> storage-provisioner [1765a8caabc2] <==
	I0719 03:44:25.498936       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0719 03:44:25.521244       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0719 03:44:25.521380       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0719 03:44:42.940834       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0719 03:44:42.941163       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"072653e5-6a29-4766-a2a9-9d07f279f3d4", APIVersion:"v1", ResourceVersion:"510", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-365100_ce41088e-5e9e-42dc-a328-87e78a1e4407 became leader
	I0719 03:44:42.941261       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-365100_ce41088e-5e9e-42dc-a328-87e78a1e4407!
	I0719 03:44:43.042441       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-365100_ce41088e-5e9e-42dc-a328-87e78a1e4407!
	
	
	==> storage-provisioner [f4c4f29a1aa7] <==
	I0719 03:44:06.111676       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0719 03:44:06.119866       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

-- /stdout --
** stderr ** 
	W0719 03:44:46.103462    6508 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-365100 -n functional-365100
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-365100 -n functional-365100: (1.4353717s)
helpers_test.go:261: (dbg) Run:  kubectl --context functional-365100 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (6.94s)

TestFunctional/parallel/ConfigCmd (2.08s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-365100 config unset cpus" to be -""- but got *"W0719 03:45:59.856954    6472 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube3\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-365100 config get cpus: exit status 14 (350.4719ms)

** stderr ** 
	W0719 03:46:00.281643   10020 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-365100 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0719 03:46:00.281643   10020 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube3\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 config set cpus 2
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-365100 config set cpus 2" to be -"! These changes will take effect upon a minikube delete and then a minikube start"- but got *"W0719 03:46:00.638786   14464 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube3\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n! These changes will take effect upon a minikube delete and then a minikube start"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 config get cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-365100 config get cpus" to be -""- but got *"W0719 03:46:00.969034    7980 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube3\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-365100 config unset cpus" to be -""- but got *"W0719 03:46:01.277625    5568 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube3\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-365100 config get cpus: exit status 14 (276.648ms)

** stderr ** 
	W0719 03:46:01.620933    6420 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-365100 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0719 03:46:01.620933    6420 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube3\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
--- FAIL: TestFunctional/parallel/ConfigCmd (2.08s)

TestPause/serial/VerifyDeletedResources (5.28s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (3.051754s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-392900
pause_test.go:175: expected to see error and volume "docker volume inspect pause-392900" to not exist after deletion but got no error and this output: 
-- stdout --
	[
	    {
	        "CreatedAt": "2024-07-19T04:54:30Z",
	        "Driver": "local",
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "pause-392900"
	        },
	        "Mountpoint": "/var/lib/docker/volumes/pause-392900/_data",
	        "Name": "pause-392900",
	        "Options": null,
	        "Scope": "local"
	    }
	]

-- /stdout --
pause_test.go:178: (dbg) Run:  docker network ls
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/VerifyDeletedResources]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-392900
helpers_test.go:235: (dbg) docker inspect pause-392900:

-- stdout --
	[
	    {
	        "CreatedAt": "2024-07-19T04:54:30Z",
	        "Driver": "local",
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "pause-392900"
	        },
	        "Mountpoint": "/var/lib/docker/volumes/pause-392900/_data",
	        "Name": "pause-392900",
	        "Options": null,
	        "Scope": "local"
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-392900 -n pause-392900
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-392900 -n pause-392900: exit status 85 (405.6428ms)

-- stdout --
	* Profile "pause-392900" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p pause-392900"

-- /stdout --
** stderr ** 
	W0719 04:59:07.242451    6556 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "pause-392900" host is not running, skipping log retrieval (state="* Profile \"pause-392900\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p pause-392900\"")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/VerifyDeletedResources]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-392900
helpers_test.go:235: (dbg) docker inspect pause-392900:

-- stdout --
	[
	    {
	        "CreatedAt": "2024-07-19T04:54:30Z",
	        "Driver": "local",
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "pause-392900"
	        },
	        "Mountpoint": "/var/lib/docker/volumes/pause-392900/_data",
	        "Name": "pause-392900",
	        "Options": null,
	        "Scope": "local"
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-392900 -n pause-392900
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-392900 -n pause-392900: exit status 85 (610.5202ms)

-- stdout --
	* Profile "pause-392900" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p pause-392900"

-- /stdout --
** stderr ** 
	W0719 04:59:07.939187   14200 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "pause-392900" host is not running, skipping log retrieval (state="* Profile \"pause-392900\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p pause-392900\"")
--- FAIL: TestPause/serial/VerifyDeletedResources (5.28s)

TestStartStop/group/old-k8s-version/serial/SecondStart (427.5s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-546500 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.20.0
E0719 05:17:31.039490   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\false-255400\client.crt: The system cannot find the path specified.
E0719 05:17:39.657043   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-172900\client.crt: The system cannot find the path specified.
E0719 05:17:50.458554   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\flannel-255400\client.crt: The system cannot find the path specified.
E0719 05:17:55.221894   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\enable-default-cni-255400\client.crt: The system cannot find the path specified.
E0719 05:18:01.329326   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\bridge-255400\client.crt: The system cannot find the path specified.
E0719 05:18:10.612963   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kindnet-255400\client.crt: The system cannot find the path specified.
E0719 05:18:47.068565   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kubenet-255400\client.crt: The system cannot find the path specified.
E0719 05:19:03.551308   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-365100\client.crt: The system cannot find the path specified.
E0719 05:20:06.493434   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\flannel-255400\client.crt: The system cannot find the path specified.
E0719 05:20:11.261017   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\enable-default-cni-255400\client.crt: The system cannot find the path specified.
E0719 05:20:17.394130   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\bridge-255400\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p old-k8s-version-546500 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.20.0: exit status 102 (6m58.7297248s)

-- stdout --
	* [old-k8s-version-546500] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	  - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19302
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-546500" primary control-plane node in "old-k8s-version-546500" cluster
	* Pulling base image v0.0.44-1721324606-19298 ...
	* Restarting existing docker container for "old-k8s-version-546500" ...
	* Preparing Kubernetes v1.20.0 on Docker 27.0.3 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-546500 addons enable metrics-server
	
	* Enabled addons: default-storageclass, metrics-server, storage-provisioner, dashboard
	
	

-- /stdout --
** stderr ** 
	W0719 05:17:30.976856    8448 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0719 05:17:31.069312    8448 out.go:291] Setting OutFile to fd 1848 ...
	I0719 05:17:31.069312    8448 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 05:17:31.069312    8448 out.go:304] Setting ErrFile to fd 1480...
	I0719 05:17:31.069312    8448 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 05:17:31.100480    8448 out.go:298] Setting JSON to false
	I0719 05:17:31.104944    8448 start.go:129] hostinfo: {"hostname":"minikube3","uptime":183236,"bootTime":1721183014,"procs":196,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0719 05:17:31.104944    8448 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 05:17:31.110365    8448 out.go:177] * [old-k8s-version-546500] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0719 05:17:31.113425    8448 notify.go:220] Checking for updates...
	I0719 05:17:31.117083    8448 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0719 05:17:31.120536    8448 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 05:17:31.122607    8448 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0719 05:17:31.126188    8448 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 05:17:31.130009    8448 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 05:17:31.133619    8448 config.go:182] Loaded profile config "old-k8s-version-546500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0719 05:17:31.137588    8448 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0719 05:17:31.140504    8448 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 05:17:31.466625    8448 docker.go:123] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0719 05:17:31.484622    8448 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0719 05:17:31.876107    8448 info.go:266] docker info: {ID:924ecda6-fdfd-44a1-a6d3-1c1814628cc9 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:95 OomKillDisable:true NGoroutines:97 SystemTime:2024-07-19 05:17:31.824667739 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0719 05:17:31.880100    8448 out.go:177] * Using the docker driver based on existing profile
	I0719 05:17:31.883085    8448 start.go:297] selected driver: docker
	I0719 05:17:31.883085    8448 start.go:901] validating driver "docker" against &{Name:old-k8s-version-546500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-546500 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountSt
ring:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 05:17:31.883085    8448 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 05:17:32.026124    8448 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0719 05:17:32.440716    8448 info.go:266] docker info: {ID:924ecda6-fdfd-44a1-a6d3-1c1814628cc9 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:95 OomKillDisable:true NGoroutines:97 SystemTime:2024-07-19 05:17:32.401813231 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion
:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:D
ocker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0719 05:17:32.441763    8448 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 05:17:32.441828    8448 cni.go:84] Creating CNI manager for ""
	I0719 05:17:32.441900    8448 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0719 05:17:32.442035    8448 start.go:340] cluster config:
	{Name:old-k8s-version-546500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-546500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:
docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 05:17:32.447982    8448 out.go:177] * Starting "old-k8s-version-546500" primary control-plane node in "old-k8s-version-546500" cluster
	I0719 05:17:32.449972    8448 cache.go:121] Beginning downloading kic base image for docker with docker
	I0719 05:17:32.452975    8448 out.go:177] * Pulling base image v0.0.44-1721324606-19298 ...
	I0719 05:17:32.455978    8448 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0719 05:17:32.455978    8448 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local docker daemon
	I0719 05:17:32.455978    8448 preload.go:146] Found local preload: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0719 05:17:32.455978    8448 cache.go:56] Caching tarball of preloaded images
	I0719 05:17:32.456973    8448 preload.go:172] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0719 05:17:32.456973    8448 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0719 05:17:32.456973    8448 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\old-k8s-version-546500\config.json ...
	W0719 05:17:32.697183    8448 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f is of wrong architecture
	I0719 05:17:32.697183    8448 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f to local cache
	I0719 05:17:32.697183    8448 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f.tar -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.44-1721324606-19298@sha256_1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f.tar
	I0719 05:17:32.698227    8448 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f.tar -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.44-1721324606-19298@sha256_1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f.tar
	I0719 05:17:32.698227    8448 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory
	I0719 05:17:32.698227    8448 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory, skipping pull
	I0719 05:17:32.698227    8448 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f exists in cache, skipping pull
	I0719 05:17:32.698227    8448 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f as a tarball
	I0719 05:17:32.698227    8448 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f from local cache
	I0719 05:17:32.698227    8448 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f.tar -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.44-1721324606-19298@sha256_1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f.tar
	I0719 05:17:33.302099    8448 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f from cached tarball
	I0719 05:17:33.302099    8448 cache.go:194] Successfully downloaded all kic artifacts
	I0719 05:17:33.302099    8448 start.go:360] acquireMachinesLock for old-k8s-version-546500: {Name:mk4f60898aa5f7cab92e167681f6cfa4a13bd45f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 05:17:33.302099    8448 start.go:364] duration metric: took 0s to acquireMachinesLock for "old-k8s-version-546500"
	I0719 05:17:33.302099    8448 start.go:96] Skipping create...Using existing machine configuration
	I0719 05:17:33.302099    8448 fix.go:54] fixHost starting: 
	I0719 05:17:33.324101    8448 cli_runner.go:164] Run: docker container inspect old-k8s-version-546500 --format={{.State.Status}}
	I0719 05:17:33.555483    8448 fix.go:112] recreateIfNeeded on old-k8s-version-546500: state=Stopped err=<nil>
	W0719 05:17:33.556524    8448 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 05:17:33.559618    8448 out.go:177] * Restarting existing docker container for "old-k8s-version-546500" ...
	I0719 05:17:33.577194    8448 cli_runner.go:164] Run: docker start old-k8s-version-546500
	I0719 05:17:34.715589    8448 cli_runner.go:217] Completed: docker start old-k8s-version-546500: (1.1383861s)
	I0719 05:17:34.730588    8448 cli_runner.go:164] Run: docker container inspect old-k8s-version-546500 --format={{.State.Status}}
	I0719 05:17:34.950493    8448 kic.go:430] container "old-k8s-version-546500" state is running.
	I0719 05:17:34.964490    8448 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-546500
	I0719 05:17:35.172798    8448 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\old-k8s-version-546500\config.json ...
	I0719 05:17:35.176814    8448 machine.go:94] provisionDockerMachine start ...
	I0719 05:17:35.194811    8448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-546500
	I0719 05:17:35.435893    8448 main.go:141] libmachine: Using SSH client type: native
	I0719 05:17:35.436902    8448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x142aa40] 0x142d620 <nil>  [] 0s} 127.0.0.1 52539 <nil> <nil>}
	I0719 05:17:35.436902    8448 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 05:17:35.439895    8448 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0719 05:17:38.637073    8448 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-546500
	
	I0719 05:17:38.637073    8448 ubuntu.go:169] provisioning hostname "old-k8s-version-546500"
	I0719 05:17:38.650079    8448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-546500
	I0719 05:17:38.878682    8448 main.go:141] libmachine: Using SSH client type: native
	I0719 05:17:38.879677    8448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x142aa40] 0x142d620 <nil>  [] 0s} 127.0.0.1 52539 <nil> <nil>}
	I0719 05:17:38.879677    8448 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-546500 && echo "old-k8s-version-546500" | sudo tee /etc/hostname
	I0719 05:17:39.095042    8448 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-546500
	
	I0719 05:17:39.110020    8448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-546500
	I0719 05:17:39.326192    8448 main.go:141] libmachine: Using SSH client type: native
	I0719 05:17:39.327201    8448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x142aa40] 0x142d620 <nil>  [] 0s} 127.0.0.1 52539 <nil> <nil>}
	I0719 05:17:39.327201    8448 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-546500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-546500/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-546500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 05:17:39.503595    8448 main.go:141] libmachine: SSH cmd err, output: <nil>: 
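The SSH command above patches `/etc/hosts` idempotently: it does nothing if an entry for the hostname already exists, rewrites the `127.0.1.1` line if one is present, and appends one otherwise. A minimal standalone sketch of the same logic, run against a scratch copy instead of the real `/etc/hosts` (the file path and hostname here are placeholders, not the values minikube uses, and no sudo is needed):

```shell
#!/bin/sh
# Sketch of the idempotent 127.0.1.1 hostname update from the log above,
# exercised against a temp file rather than /etc/hosts.
HOSTS=$(mktemp)
NEW_NAME="example-node"
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"

# Skip entirely if an entry for NEW_NAME already exists.
if ! grep -q "\s${NEW_NAME}\$" "$HOSTS"; then
    if grep -q '^127\.0\.1\.1\s' "$HOSTS"; then
        # Rewrite the existing 127.0.1.1 line in place.
        sed -i "s/^127\.0\.1\.1\s.*/127.0.1.1 ${NEW_NAME}/" "$HOSTS"
    else
        # No 127.0.1.1 line yet: append one.
        echo "127.0.1.1 ${NEW_NAME}" >> "$HOSTS"
    fi
fi
grep '^127\.0\.1\.1' "$HOSTS"   # → 127.0.1.1 example-node
rm -f "$HOSTS"
```

Because the outer `grep` guard matches any line ending in the hostname, re-running the script leaves the file unchanged, which is why minikube can safely execute it on every provision.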
	I0719 05:17:39.503595    8448 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0719 05:17:39.503595    8448 ubuntu.go:177] setting up certificates
	I0719 05:17:39.503595    8448 provision.go:84] configureAuth start
	I0719 05:17:39.519611    8448 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-546500
	I0719 05:17:39.727987    8448 provision.go:143] copyHostCerts
	I0719 05:17:39.727987    8448 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0719 05:17:39.727987    8448 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0719 05:17:39.728992    8448 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0719 05:17:39.729993    8448 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0719 05:17:39.729993    8448 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0719 05:17:39.730991    8448 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0719 05:17:39.731986    8448 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0719 05:17:39.731986    8448 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0719 05:17:39.731986    8448 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1675 bytes)
	I0719 05:17:39.733044    8448 provision.go:117] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.old-k8s-version-546500 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-546500]
	I0719 05:17:40.048638    8448 provision.go:177] copyRemoteCerts
	I0719 05:17:40.071624    8448 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 05:17:40.082639    8448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-546500
	I0719 05:17:40.305698    8448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52539 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\old-k8s-version-546500\id_rsa Username:docker}
	I0719 05:17:40.446263    8448 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0719 05:17:40.530756    8448 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1233 bytes)
	I0719 05:17:40.590153    8448 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 05:17:40.650269    8448 provision.go:87] duration metric: took 1.1466655s to configureAuth
	I0719 05:17:40.650269    8448 ubuntu.go:193] setting minikube options for container-runtime
	I0719 05:17:40.651263    8448 config.go:182] Loaded profile config "old-k8s-version-546500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0719 05:17:40.662259    8448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-546500
	I0719 05:17:40.895632    8448 main.go:141] libmachine: Using SSH client type: native
	I0719 05:17:40.896192    8448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x142aa40] 0x142d620 <nil>  [] 0s} 127.0.0.1 52539 <nil> <nil>}
	I0719 05:17:40.896192    8448 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0719 05:17:41.088578    8448 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0719 05:17:41.088669    8448 ubuntu.go:71] root file system type: overlay
	I0719 05:17:41.088739    8448 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0719 05:17:41.102366    8448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-546500
	I0719 05:17:41.374704    8448 main.go:141] libmachine: Using SSH client type: native
	I0719 05:17:41.375711    8448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x142aa40] 0x142d620 <nil>  [] 0s} 127.0.0.1 52539 <nil> <nil>}
	I0719 05:17:41.375711    8448 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0719 05:17:41.592876    8448 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0719 05:17:41.610923    8448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-546500
	I0719 05:17:41.832310    8448 main.go:141] libmachine: Using SSH client type: native
	I0719 05:17:41.832310    8448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x142aa40] 0x142d620 <nil>  [] 0s} 127.0.0.1 52539 <nil> <nil>}
	I0719 05:17:41.832310    8448 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0719 05:17:42.028383    8448 main.go:141] libmachine: SSH cmd err, output: <nil>: 
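The single SSH command above is a compare-then-replace update: the freshly generated `docker.service.new` is only moved over the live unit file (followed by `daemon-reload`, `enable`, and `restart`) when `diff` exits nonzero, i.e. when the contents actually changed, so an unchanged config never triggers a daemon restart. A minimal sketch of that pattern against ordinary files (paths are placeholders and no `systemctl` is invoked):

```shell
#!/bin/sh
# Compare-then-replace: install the new config and "reload" only when
# its contents differ from the current one, mirroring the log above.
CUR=$(mktemp)
NEW=$(mktemp)
echo "setting=old" > "$CUR"
echo "setting=new" > "$NEW"

if diff -u "$CUR" "$NEW" > /dev/null; then
    # Identical: leave the live file untouched, no restart needed.
    echo "unchanged"
else
    mv "$NEW" "$CUR"   # install the new version atomically
    echo "updated: a real deployment would daemon-reload and restart here"
fi
cat "$CUR"             # → setting=new
rm -f "$CUR" "$NEW"
```

The `||` form in the log relies on `diff`'s exit status (0 when files match, 1 when they differ), which is what makes the provisioning step idempotent across repeated `minikube start` runs.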
	I0719 05:17:42.028939    8448 machine.go:97] duration metric: took 6.8520728s to provisionDockerMachine
	I0719 05:17:42.029009    8448 start.go:293] postStartSetup for "old-k8s-version-546500" (driver="docker")
	I0719 05:17:42.029009    8448 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 05:17:42.049935    8448 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 05:17:42.064780    8448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-546500
	I0719 05:17:42.291430    8448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52539 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\old-k8s-version-546500\id_rsa Username:docker}
	I0719 05:17:42.509016    8448 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 05:17:42.555033    8448 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0719 05:17:42.555217    8448 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0719 05:17:42.555278    8448 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0719 05:17:42.555278    8448 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0719 05:17:42.555330    8448 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0719 05:17:42.555785    8448 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0719 05:17:42.556857    8448 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\109722.pem -> 109722.pem in /etc/ssl/certs
	I0719 05:17:42.589349    8448 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 05:17:42.612355    8448 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\109722.pem --> /etc/ssl/certs/109722.pem (1708 bytes)
	I0719 05:17:42.676804    8448 start.go:296] duration metric: took 647.7899ms for postStartSetup
	I0719 05:17:42.691897    8448 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 05:17:42.701797    8448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-546500
	I0719 05:17:42.910752    8448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52539 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\old-k8s-version-546500\id_rsa Username:docker}
	I0719 05:17:43.054773    8448 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0719 05:17:43.073294    8448 fix.go:56] duration metric: took 9.7711192s for fixHost
	I0719 05:17:43.073294    8448 start.go:83] releasing machines lock for "old-k8s-version-546500", held for 9.7711192s
	I0719 05:17:43.092766    8448 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-546500
	I0719 05:17:43.321099    8448 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0719 05:17:43.333083    8448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-546500
	I0719 05:17:43.335089    8448 ssh_runner.go:195] Run: cat /version.json
	I0719 05:17:43.349094    8448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-546500
	I0719 05:17:43.552083    8448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52539 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\old-k8s-version-546500\id_rsa Username:docker}
	I0719 05:17:43.571114    8448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52539 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\old-k8s-version-546500\id_rsa Username:docker}
	W0719 05:17:43.681106    8448 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0719 05:17:43.712087    8448 ssh_runner.go:195] Run: systemctl --version
	I0719 05:17:43.738099    8448 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0719 05:17:43.763864    8448 ssh_runner.go:195] Run: sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	W0719 05:17:43.786876    8448 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W0719 05:17:43.786876    8448 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	W0719 05:17:43.786876    8448 start.go:439] unable to name loopback interface in configureRuntimes: unable to patch loopback cni config "/etc/cni/net.d/*loopback.conf*": sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;: Process exited with status 1
	stdout:
	
	stderr:
	find: '\\etc\\cni\\net.d': No such file or directory
	I0719 05:17:43.806861    8448 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0719 05:17:43.858859    8448 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0719 05:17:43.898930    8448 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 05:17:43.898930    8448 start.go:495] detecting cgroup driver to use...
	I0719 05:17:43.898930    8448 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0719 05:17:43.898930    8448 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 05:17:43.952142    8448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0719 05:17:43.993940    8448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0719 05:17:44.030753    8448 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0719 05:17:44.050632    8448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0719 05:17:44.106389    8448 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 05:17:44.143377    8448 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0719 05:17:44.185007    8448 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 05:17:44.225097    8448 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 05:17:44.264156    8448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0719 05:17:44.313610    8448 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 05:17:44.346609    8448 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 05:17:44.384830    8448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 05:17:44.593441    8448 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0719 05:17:44.801486    8448 start.go:495] detecting cgroup driver to use...
	I0719 05:17:44.801486    8448 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0719 05:17:44.819999    8448 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0719 05:17:44.851982    8448 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0719 05:17:44.866992    8448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 05:17:44.904165    8448 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 05:17:44.979205    8448 ssh_runner.go:195] Run: which cri-dockerd
	I0719 05:17:45.015979    8448 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0719 05:17:45.037580    8448 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0719 05:17:45.091572    8448 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0719 05:17:45.311043    8448 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0719 05:17:45.506510    8448 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0719 05:17:45.507567    8448 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0719 05:17:45.576571    8448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 05:17:45.754463    8448 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 05:17:48.638527    8448 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.8840419s)
	I0719 05:17:48.654602    8448 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 05:17:48.734232    8448 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 05:17:48.808543    8448 out.go:204] * Preparing Kubernetes v1.20.0 on Docker 27.0.3 ...
	I0719 05:17:48.826911    8448 cli_runner.go:164] Run: docker exec -t old-k8s-version-546500 dig +short host.docker.internal
	I0719 05:17:49.115620    8448 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0719 05:17:49.130075    8448 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0719 05:17:49.140088    8448 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 05:17:49.185606    8448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-546500
	I0719 05:17:49.383980    8448 kubeadm.go:883] updating cluster {Name:old-k8s-version-546500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-546500 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jen
kins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 05:17:49.384314    8448 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0719 05:17:49.396551    8448 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0719 05:17:49.443195    8448 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-proxy:v1.20.0
	k8s.gcr.io/kube-scheduler:v1.20.0
	k8s.gcr.io/kube-controller-manager:v1.20.0
	k8s.gcr.io/kube-apiserver:v1.20.0
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	k8s.gcr.io/pause:3.2
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0719 05:17:49.443195    8448 docker.go:691] registry.k8s.io/kube-apiserver:v1.20.0 wasn't preloaded
	I0719 05:17:49.462479    8448 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0719 05:17:49.503759    8448 ssh_runner.go:195] Run: which lz4
	I0719 05:17:49.529757    8448 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0719 05:17:49.542526    8448 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 05:17:49.542597    8448 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (401930599 bytes)
	I0719 05:18:04.148561    8448 docker.go:649] duration metric: took 14.6346919s to copy over tarball
	I0719 05:18:04.166871    8448 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 05:18:09.276691    8448 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (5.1096943s)
	I0719 05:18:09.276793    8448 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 05:18:09.395130    8448 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0719 05:18:09.420569    8448 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2824 bytes)
	I0719 05:18:09.471487    8448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 05:18:09.660802    8448 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 05:18:19.971783    8448 ssh_runner.go:235] Completed: sudo systemctl restart docker: (10.310902s)
	I0719 05:18:19.984999    8448 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0719 05:18:20.039797    8448 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-proxy:v1.20.0
	k8s.gcr.io/kube-apiserver:v1.20.0
	k8s.gcr.io/kube-scheduler:v1.20.0
	k8s.gcr.io/kube-controller-manager:v1.20.0
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	k8s.gcr.io/pause:3.2
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0719 05:18:20.039797    8448 docker.go:691] registry.k8s.io/kube-apiserver:v1.20.0 wasn't preloaded
	I0719 05:18:20.039797    8448 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0719 05:18:20.055954    8448 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 05:18:20.072025    8448 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 05:18:20.077424    8448 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 05:18:20.077424    8448 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 05:18:20.084129    8448 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 05:18:20.086759    8448 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 05:18:20.094566    8448 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0719 05:18:20.096167    8448 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 05:18:20.101953    8448 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0719 05:18:20.103480    8448 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 05:18:20.113832    8448 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0719 05:18:20.115477    8448 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0719 05:18:20.121182    8448 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0719 05:18:20.123148    8448 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 05:18:20.129224    8448 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0719 05:18:20.136152    8448 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	W0719 05:18:20.201683    8448 image.go:187] authn lookup for gcr.io/k8s-minikube/storage-provisioner:v5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0719 05:18:20.293234    8448 image.go:187] authn lookup for registry.k8s.io/kube-scheduler:v1.20.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0719 05:18:20.401102    8448 image.go:187] authn lookup for registry.k8s.io/kube-apiserver:v1.20.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0719 05:18:20.508627    8448 image.go:187] authn lookup for registry.k8s.io/kube-controller-manager:v1.20.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0719 05:18:20.618841    8448 image.go:187] authn lookup for registry.k8s.io/coredns:1.7.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0719 05:18:20.694031    8448 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	W0719 05:18:20.727038    8448 image.go:187] authn lookup for registry.k8s.io/pause:3.2 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0719 05:18:20.811032    8448 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0719 05:18:20.820032    8448 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	W0719 05:18:20.853034    8448 image.go:187] authn lookup for registry.k8s.io/etcd:3.4.13-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0719 05:18:20.870508    8448 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0719 05:18:20.870508    8448 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.20.0 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.20.0
	I0719 05:18:20.870508    8448 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 05:18:20.880510    8448 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0719 05:18:20.880510    8448 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.20.0 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.20.0
	I0719 05:18:20.880510    8448 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 05:18:20.883505    8448 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 05:18:20.885515    8448 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0719 05:18:20.899071    8448 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0719 05:18:20.925400    8448 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0719 05:18:20.953390    8448 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0719 05:18:20.972385    8448 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0719 05:18:20.972385    8448 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.20.0 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.20.0
	I0719 05:18:20.972385    8448 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	W0719 05:18:20.979384    8448 image.go:187] authn lookup for registry.k8s.io/kube-proxy:v1.20.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0719 05:18:20.980384    8448 cache_images.go:289] Loading image from: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.20.0
	I0719 05:18:20.980384    8448 cache_images.go:289] Loading image from: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.20.0
	I0719 05:18:20.986386    8448 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 05:18:21.091246    8448 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0719 05:18:21.091246    8448 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.2 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.2
	I0719 05:18:21.091246    8448 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0719 05:18:21.091246    8448 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns:1.7.0 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.7.0
	I0719 05:18:21.091246    8448 docker.go:337] Removing image: registry.k8s.io/coredns:1.7.0
	I0719 05:18:21.091246    8448 docker.go:337] Removing image: registry.k8s.io/pause:3.2
	I0719 05:18:21.104083    8448 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I0719 05:18:21.104083    8448 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.7.0
	I0719 05:18:21.140396    8448 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0719 05:18:21.187294    8448 cache_images.go:289] Loading image from: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.20.0
	I0719 05:18:21.264258    8448 cache_images.go:289] Loading image from: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.2
	I0719 05:18:21.270176    8448 cache_images.go:289] Loading image from: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.7.0
	I0719 05:18:21.277046    8448 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0719 05:18:21.277046    8448 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.4.13-0 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.13-0
	I0719 05:18:21.277785    8448 docker.go:337] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0719 05:18:21.282568    8448 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0719 05:18:21.295072    8448 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.13-0
	I0719 05:18:21.360847    8448 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0719 05:18:21.360847    8448 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.20.0 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.20.0
	I0719 05:18:21.360847    8448 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 05:18:21.364784    8448 cache_images.go:289] Loading image from: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.13-0
	I0719 05:18:21.372769    8448 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.20.0
	I0719 05:18:21.424271    8448 cache_images.go:289] Loading image from: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.20.0
	I0719 05:18:21.424271    8448 cache_images.go:92] duration metric: took 1.3844636s to LoadCachedImages
	W0719 05:18:21.424819    8448 out.go:239] X Unable to load cached images: LoadCachedImages: CreateFile C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.20.0: The system cannot find the file specified.
	X Unable to load cached images: LoadCachedImages: CreateFile C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.20.0: The system cannot find the file specified.
	I0719 05:18:21.424819    8448 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.20.0 docker true true} ...
	I0719 05:18:21.425109    8448 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-546500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-546500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 05:18:21.436035    8448 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0719 05:18:21.562875    8448 cni.go:84] Creating CNI manager for ""
	I0719 05:18:21.562875    8448 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0719 05:18:21.562875    8448 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 05:18:21.562875    8448 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-546500 NodeName:old-k8s-version-546500 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0719 05:18:21.563625    8448 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-546500"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 05:18:21.578889    8448 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0719 05:18:21.605421    8448 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 05:18:21.622893    8448 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 05:18:21.649357    8448 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0719 05:18:21.685261    8448 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 05:18:21.718760    8448 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2118 bytes)
	I0719 05:18:21.785276    8448 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0719 05:18:21.798839    8448 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 05:18:21.850382    8448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 05:18:22.027383    8448 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 05:18:22.061058    8448 certs.go:68] Setting up C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\old-k8s-version-546500 for IP: 192.168.76.2
	I0719 05:18:22.061112    8448 certs.go:194] generating shared ca certs ...
	I0719 05:18:22.061112    8448 certs.go:226] acquiring lock for ca certs: {Name:mk09ff4ada22228900e1815c250154c7d8d76854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 05:18:22.062031    8448 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key
	I0719 05:18:22.062758    8448 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key
	I0719 05:18:22.062943    8448 certs.go:256] generating profile certs ...
	I0719 05:18:22.063707    8448 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\old-k8s-version-546500\client.key
	I0719 05:18:22.064073    8448 certs.go:359] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\old-k8s-version-546500\apiserver.key.44d1f9b8
	I0719 05:18:22.064506    8448 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\old-k8s-version-546500\proxy-client.key
	I0719 05:18:22.066027    8448 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10972.pem (1338 bytes)
	W0719 05:18:22.066310    8448 certs.go:480] ignoring C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10972_empty.pem, impossibly tiny 0 bytes
	I0719 05:18:22.066310    8448 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0719 05:18:22.066310    8448 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0719 05:18:22.067347    8448 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0719 05:18:22.067726    8448 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0719 05:18:22.067975    8448 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\109722.pem (1708 bytes)
	I0719 05:18:22.070968    8448 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 05:18:22.131695    8448 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 05:18:22.193141    8448 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 05:18:22.270129    8448 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 05:18:22.326057    8448 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\old-k8s-version-546500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0719 05:18:22.403310    8448 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\old-k8s-version-546500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 05:18:22.480947    8448 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\old-k8s-version-546500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 05:18:22.553562    8448 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\old-k8s-version-546500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 05:18:22.615375    8448 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10972.pem --> /usr/share/ca-certificates/10972.pem (1338 bytes)
	I0719 05:18:22.696341    8448 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\109722.pem --> /usr/share/ca-certificates/109722.pem (1708 bytes)
	I0719 05:18:22.811994    8448 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 05:18:22.863445    8448 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 05:18:22.918216    8448 ssh_runner.go:195] Run: openssl version
	I0719 05:18:22.971588    8448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10972.pem && ln -fs /usr/share/ca-certificates/10972.pem /etc/ssl/certs/10972.pem"
	I0719 05:18:23.083069    8448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10972.pem
	I0719 05:18:23.162533    8448 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 03:42 /usr/share/ca-certificates/10972.pem
	I0719 05:18:23.187229    8448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10972.pem
	I0719 05:18:23.285016    8448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10972.pem /etc/ssl/certs/51391683.0"
	I0719 05:18:23.379513    8448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109722.pem && ln -fs /usr/share/ca-certificates/109722.pem /etc/ssl/certs/109722.pem"
	I0719 05:18:23.480786    8448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109722.pem
	I0719 05:18:23.495130    8448 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 03:42 /usr/share/ca-certificates/109722.pem
	I0719 05:18:23.512742    8448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109722.pem
	I0719 05:18:23.550264    8448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109722.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 05:18:23.593198    8448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 05:18:23.642338    8448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 05:18:23.659266    8448 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 03:30 /usr/share/ca-certificates/minikubeCA.pem
	I0719 05:18:23.673229    8448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 05:18:23.708431    8448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 05:18:23.747349    8448 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 05:18:23.775687    8448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 05:18:23.812362    8448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 05:18:23.850229    8448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 05:18:23.882252    8448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 05:18:23.915255    8448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 05:18:23.975242    8448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0719 05:18:24.057020    8448 kubeadm.go:392] StartCluster: {Name:old-k8s-version-546500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-546500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 05:18:24.071856    8448 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0719 05:18:24.201085    8448 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 05:18:24.267349    8448 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 05:18:24.267349    8448 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 05:18:24.288359    8448 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 05:18:24.362811    8448 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 05:18:24.378554    8448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-546500
	I0719 05:18:24.613822    8448 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-546500" does not appear in C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0719 05:18:24.615866    8448 kubeconfig.go:62] C:\Users\jenkins.minikube3\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-546500" cluster setting kubeconfig missing "old-k8s-version-546500" context setting]
	I0719 05:18:24.618817    8448 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\kubeconfig: {Name:mk966a7640504e03827322930a51a762b5508893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 05:18:24.661807    8448 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 05:18:24.688907    8448 kubeadm.go:630] The running cluster does not require reconfiguration: 127.0.0.1
	I0719 05:18:24.688907    8448 kubeadm.go:597] duration metric: took 421.5547ms to restartPrimaryControlPlane
	I0719 05:18:24.688907    8448 kubeadm.go:394] duration metric: took 631.8822ms to StartCluster
	I0719 05:18:24.688907    8448 settings.go:142] acquiring lock: {Name:mke99fb8c09012609ce6804e7dfd4d68f5541df7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 05:18:24.689574    8448 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0719 05:18:24.693065    8448 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\kubeconfig: {Name:mk966a7640504e03827322930a51a762b5508893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 05:18:24.695086    8448 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 05:18:24.695086    8448 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 05:18:24.696167    8448 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-546500"
	I0719 05:18:24.696167    8448 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-546500"
	I0719 05:18:24.696167    8448 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-546500"
	I0719 05:18:24.696167    8448 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-546500"
	W0719 05:18:24.696167    8448 addons.go:243] addon storage-provisioner should already be in state true
	I0719 05:18:24.696167    8448 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-546500"
	I0719 05:18:24.696167    8448 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-546500"
	I0719 05:18:24.696167    8448 config.go:182] Loaded profile config "old-k8s-version-546500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	W0719 05:18:24.696167    8448 addons.go:243] addon metrics-server should already be in state true
	I0719 05:18:24.696167    8448 addons.go:69] Setting dashboard=true in profile "old-k8s-version-546500"
	I0719 05:18:24.696167    8448 host.go:66] Checking if "old-k8s-version-546500" exists ...
	I0719 05:18:24.696167    8448 host.go:66] Checking if "old-k8s-version-546500" exists ...
	I0719 05:18:24.696167    8448 addons.go:234] Setting addon dashboard=true in "old-k8s-version-546500"
	W0719 05:18:24.696167    8448 addons.go:243] addon dashboard should already be in state true
	I0719 05:18:24.697065    8448 host.go:66] Checking if "old-k8s-version-546500" exists ...
	I0719 05:18:24.700068    8448 out.go:177] * Verifying Kubernetes components...
	I0719 05:18:24.728069    8448 cli_runner.go:164] Run: docker container inspect old-k8s-version-546500 --format={{.State.Status}}
	I0719 05:18:24.730073    8448 cli_runner.go:164] Run: docker container inspect old-k8s-version-546500 --format={{.State.Status}}
	I0719 05:18:24.732072    8448 cli_runner.go:164] Run: docker container inspect old-k8s-version-546500 --format={{.State.Status}}
	I0719 05:18:24.732072    8448 cli_runner.go:164] Run: docker container inspect old-k8s-version-546500 --format={{.State.Status}}
	I0719 05:18:24.732072    8448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 05:18:24.977915    8448 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-546500"
	W0719 05:18:24.978906    8448 addons.go:243] addon default-storageclass should already be in state true
	I0719 05:18:24.978906    8448 host.go:66] Checking if "old-k8s-version-546500" exists ...
	I0719 05:18:24.986910    8448 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 05:18:24.989917    8448 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 05:18:24.989917    8448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 05:18:25.000899    8448 cli_runner.go:164] Run: docker container inspect old-k8s-version-546500 --format={{.State.Status}}
	I0719 05:18:25.001929    8448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-546500
	I0719 05:18:25.002909    8448 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0719 05:18:25.006920    8448 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0719 05:18:25.009916    8448 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0719 05:18:25.009916    8448 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0719 05:18:25.018913    8448 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0719 05:18:25.022926    8448 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0719 05:18:25.022926    8448 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0719 05:18:25.023904    8448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-546500
	I0719 05:18:25.034919    8448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-546500
	I0719 05:18:25.255712    8448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52539 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\old-k8s-version-546500\id_rsa Username:docker}
	I0719 05:18:25.269704    8448 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 05:18:25.269704    8448 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 05:18:25.270722    8448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52539 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\old-k8s-version-546500\id_rsa Username:docker}
	I0719 05:18:25.284716    8448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52539 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\old-k8s-version-546500\id_rsa Username:docker}
	I0719 05:18:25.285722    8448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-546500
	I0719 05:18:25.507406    8448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52539 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\old-k8s-version-546500\id_rsa Username:docker}
	I0719 05:18:25.675078    8448 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 05:18:25.851658    8448 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0719 05:18:25.851658    8448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0719 05:18:25.883282    8448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-546500
	I0719 05:18:25.960532    8448 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0719 05:18:25.960532    8448 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0719 05:18:25.977524    8448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 05:18:26.085264    8448 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-546500" to be "Ready" ...
	I0719 05:18:26.162901    8448 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0719 05:18:26.162901    8448 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0719 05:18:26.264928    8448 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0719 05:18:26.264928    8448 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0719 05:18:26.376443    8448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 05:18:26.464466    8448 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 05:18:26.464466    8448 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0719 05:18:26.563360    8448 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0719 05:18:26.563360    8448 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0719 05:18:26.784394    8448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 05:18:26.856839    8448 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0719 05:18:26.856839    8448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0719 05:18:26.967706    8448 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0719 05:18:26.967706    8448 retry.go:31] will retry after 136.600808ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0719 05:18:27.073980    8448 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0719 05:18:27.073980    8448 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0719 05:18:27.083635    8448 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0719 05:18:27.083635    8448 retry.go:31] will retry after 192.688152ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0719 05:18:27.124849    8448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 05:18:27.266230    8448 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0719 05:18:27.266791    8448 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0719 05:18:27.294233    8448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0719 05:18:27.452391    8448 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0719 05:18:27.452391    8448 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	W0719 05:18:27.458154    8448 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0719 05:18:27.458154    8448 retry.go:31] will retry after 193.905167ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0719 05:18:27.652456    8448 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0719 05:18:27.652642    8448 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0719 05:18:27.674845    8448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0719 05:18:27.860108    8448 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0719 05:18:27.860108    8448 retry.go:31] will retry after 471.449298ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0719 05:18:27.954657    8448 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0719 05:18:27.954895    8448 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	W0719 05:18:28.156938    8448 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0719 05:18:28.157313    8448 retry.go:31] will retry after 470.441976ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0719 05:18:28.176459    8448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0719 05:18:28.374065    8448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 05:18:28.646604    8448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0719 05:18:28.658006    8448 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0719 05:18:28.658006    8448 retry.go:31] will retry after 197.992452ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0719 05:18:28.880346    8448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0719 05:18:29.163114    8448 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0719 05:18:29.163206    8448 retry.go:31] will retry after 195.516305ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0719 05:18:29.379277    8448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0719 05:18:29.461449    8448 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.0871784s)
	W0719 05:18:29.461449    8448 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0719 05:18:29.461449    8448 retry.go:31] will retry after 809.232529ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0719 05:18:30.294045    8448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 05:18:37.757066    8448 node_ready.go:49] node "old-k8s-version-546500" has status "Ready":"True"
	I0719 05:18:37.757066    8448 node_ready.go:38] duration metric: took 11.6717121s for node "old-k8s-version-546500" to be "Ready" ...
	I0719 05:18:37.757066    8448 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 05:18:38.169783    8448 pod_ready.go:78] waiting up to 6m0s for pod "coredns-74ff55c5b-9mmxg" in "kube-system" namespace to be "Ready" ...
	I0719 05:18:38.755064    8448 pod_ready.go:92] pod "coredns-74ff55c5b-9mmxg" in "kube-system" namespace has status "Ready":"True"
	I0719 05:18:38.755064    8448 pod_ready.go:81] duration metric: took 585.2761ms for pod "coredns-74ff55c5b-9mmxg" in "kube-system" namespace to be "Ready" ...
	I0719 05:18:38.755064    8448 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-546500" in "kube-system" namespace to be "Ready" ...
	I0719 05:18:39.956626    8448 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (11.309754s)
	I0719 05:18:40.661394    8448 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (11.7809579s)
	I0719 05:18:40.662387    8448 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-546500"
	I0719 05:18:40.876421    8448 pod_ready.go:102] pod "etcd-old-k8s-version-546500" in "kube-system" namespace has status "Ready":"False"
	I0719 05:18:42.348720    8448 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (12.9693431s)
	I0719 05:18:42.349366    8448 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (12.0552288s)
	I0719 05:18:42.352951    8448 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-546500 addons enable metrics-server
	
	I0719 05:18:42.358323    8448 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner, dashboard
	I0719 05:18:42.361895    8448 addons.go:510] duration metric: took 17.6666731s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner dashboard]
	I0719 05:18:43.455385    8448 pod_ready.go:102] pod "etcd-old-k8s-version-546500" in "kube-system" namespace has status "Ready":"False"
	I0719 05:18:45.782179    8448 pod_ready.go:102] pod "etcd-old-k8s-version-546500" in "kube-system" namespace has status "Ready":"False"
	I0719 05:18:48.283250    8448 pod_ready.go:102] pod "etcd-old-k8s-version-546500" in "kube-system" namespace has status "Ready":"False"
	I0719 05:18:50.290698    8448 pod_ready.go:102] pod "etcd-old-k8s-version-546500" in "kube-system" namespace has status "Ready":"False"
	I0719 05:18:52.777717    8448 pod_ready.go:102] pod "etcd-old-k8s-version-546500" in "kube-system" namespace has status "Ready":"False"
	I0719 05:18:54.779683    8448 pod_ready.go:102] pod "etcd-old-k8s-version-546500" in "kube-system" namespace has status "Ready":"False"
	I0719 05:18:56.779898    8448 pod_ready.go:102] pod "etcd-old-k8s-version-546500" in "kube-system" namespace has status "Ready":"False"
	I0719 05:18:58.788363    8448 pod_ready.go:102] pod "etcd-old-k8s-version-546500" in "kube-system" namespace has status "Ready":"False"
	I0719 05:19:01.288641    8448 pod_ready.go:102] pod "etcd-old-k8s-version-546500" in "kube-system" namespace has status "Ready":"False"
	I0719 05:19:03.783315    8448 pod_ready.go:102] pod "etcd-old-k8s-version-546500" in "kube-system" namespace has status "Ready":"False"
	I0719 05:19:06.287847    8448 pod_ready.go:102] pod "etcd-old-k8s-version-546500" in "kube-system" namespace has status "Ready":"False"
	I0719 05:19:08.781026    8448 pod_ready.go:102] pod "etcd-old-k8s-version-546500" in "kube-system" namespace has status "Ready":"False"
	I0719 05:19:10.789927    8448 pod_ready.go:102] pod "etcd-old-k8s-version-546500" in "kube-system" namespace has status "Ready":"False"
	I0719 05:19:13.283605    8448 pod_ready.go:102] pod "etcd-old-k8s-version-546500" in "kube-system" namespace has status "Ready":"False"
	I0719 05:19:15.775348    8448 pod_ready.go:102] pod "etcd-old-k8s-version-546500" in "kube-system" namespace has status "Ready":"False"
	I0719 05:19:17.788235    8448 pod_ready.go:102] pod "etcd-old-k8s-version-546500" in "kube-system" namespace has status "Ready":"False"
	I0719 05:19:20.280839    8448 pod_ready.go:102] pod "etcd-old-k8s-version-546500" in "kube-system" namespace has status "Ready":"False"
	I0719 05:19:22.582646    8448 pod_ready.go:102] pod "etcd-old-k8s-version-546500" in "kube-system" namespace has status "Ready":"False"
	I0719 05:19:24.786879    8448 pod_ready.go:102] pod "etcd-old-k8s-version-546500" in "kube-system" namespace has status "Ready":"False"
	I0719 05:19:26.788488    8448 pod_ready.go:102] pod "etcd-old-k8s-version-546500" in "kube-system" namespace has status "Ready":"False"
	I0719 05:19:29.284766    8448 pod_ready.go:102] pod "etcd-old-k8s-version-546500" in "kube-system" namespace has status "Ready":"False"
	I0719 05:19:31.288030    8448 pod_ready.go:102] pod "etcd-old-k8s-version-546500" in "kube-system" namespace has status "Ready":"False"
	I0719 05:19:33.776079    8448 pod_ready.go:102] pod "etcd-old-k8s-version-546500" in "kube-system" namespace has status "Ready":"False"
	I0719 05:19:35.779039    8448 pod_ready.go:102] pod "etcd-old-k8s-version-546500" in "kube-system" namespace has status "Ready":"False"
	I0719 05:19:37.782379    8448 pod_ready.go:102] pod "etcd-old-k8s-version-546500" in "kube-system" namespace has status "Ready":"False"
	I0719 05:19:39.783595    8448 pod_ready.go:102] pod "etcd-old-k8s-version-546500" in "kube-system" namespace has status "Ready":"False"
	I0719 05:19:42.275714    8448 pod_ready.go:102] pod "etcd-old-k8s-version-546500" in "kube-system" namespace has status "Ready":"False"
	I0719 05:19:44.275807    8448 pod_ready.go:102] pod "etcd-old-k8s-version-546500" in "kube-system" namespace has status "Ready":"False"
	I0719 05:19:46.279952    8448 pod_ready.go:102] pod "etcd-old-k8s-version-546500" in "kube-system" namespace has status "Ready":"False"
	I0719 05:19:48.283992    8448 pod_ready.go:102] pod "etcd-old-k8s-version-546500" in "kube-system" namespace has status "Ready":"False"
	I0719 05:19:50.774783    8448 pod_ready.go:102] pod "etcd-old-k8s-version-546500" in "kube-system" namespace has status "Ready":"False"
	I0719 05:19:52.844180    8448 pod_ready.go:102] pod "etcd-old-k8s-version-546500" in "kube-system" namespace has status "Ready":"False"
	I0719 05:19:55.284302    8448 pod_ready.go:102] pod "etcd-old-k8s-version-546500" in "kube-system" namespace has status "Ready":"False"
	I0719 05:19:57.293505    8448 pod_ready.go:102] pod "etcd-old-k8s-version-546500" in "kube-system" namespace has status "Ready":"False"
	I0719 05:19:59.783443    8448 pod_ready.go:102] pod "etcd-old-k8s-version-546500" in "kube-system" namespace has status "Ready":"False"
	I0719 05:20:00.275231    8448 pod_ready.go:92] pod "etcd-old-k8s-version-546500" in "kube-system" namespace has status "Ready":"True"
	I0719 05:20:00.275231    8448 pod_ready.go:81] duration metric: took 1m21.5195349s for pod "etcd-old-k8s-version-546500" in "kube-system" namespace to be "Ready" ...
	I0719 05:20:00.275231    8448 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-546500" in "kube-system" namespace to be "Ready" ...
	I0719 05:20:00.291881    8448 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-546500" in "kube-system" namespace has status "Ready":"True"
	I0719 05:20:00.291881    8448 pod_ready.go:81] duration metric: took 16.6498ms for pod "kube-apiserver-old-k8s-version-546500" in "kube-system" namespace to be "Ready" ...
	I0719 05:20:00.291881    8448 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-546500" in "kube-system" namespace to be "Ready" ...
	I0719 05:20:00.307661    8448 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-546500" in "kube-system" namespace has status "Ready":"True"
	I0719 05:20:00.308391    8448 pod_ready.go:81] duration metric: took 16.5098ms for pod "kube-controller-manager-old-k8s-version-546500" in "kube-system" namespace to be "Ready" ...
	I0719 05:20:00.308454    8448 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sgsqg" in "kube-system" namespace to be "Ready" ...
	I0719 05:20:00.320365    8448 pod_ready.go:92] pod "kube-proxy-sgsqg" in "kube-system" namespace has status "Ready":"True"
	I0719 05:20:00.320365    8448 pod_ready.go:81] duration metric: took 11.9108ms for pod "kube-proxy-sgsqg" in "kube-system" namespace to be "Ready" ...
	I0719 05:20:00.320365    8448 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-546500" in "kube-system" namespace to be "Ready" ...
	I0719 05:20:02.351045    8448 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-546500" in "kube-system" namespace has status "Ready":"False"
	I0719 05:20:04.850932    8448 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-546500" in "kube-system" namespace has status "Ready":"False"
	I0719 05:20:05.338689    8448 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-546500" in "kube-system" namespace has status "Ready":"True"
	I0719 05:20:05.338872    8448 pod_ready.go:81] duration metric: took 5.0184678s for pod "kube-scheduler-old-k8s-version-546500" in "kube-system" namespace to be "Ready" ...
	I0719 05:20:05.338872    8448 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace to be "Ready" ...
	I0719 05:20:07.367072    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:20:09.860018    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:20:11.861045    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:20:13.861320    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:20:15.865300    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:20:17.868624    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:20:20.363565    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:20:22.861247    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:20:24.866552    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:20:26.867013    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:20:28.878461    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:20:31.371642    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:20:33.865667    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:20:35.866548    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:20:37.867986    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:20:40.373913    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:20:42.868606    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:20:45.365936    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:20:47.856432    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:20:49.866439    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:20:52.368866    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:20:54.372642    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:20:56.379440    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:20:58.873921    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:01.361555    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:03.878404    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:06.388905    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:08.864563    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:10.916641    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:13.376011    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:15.962031    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:18.361981    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:20.367422    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:22.870733    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:24.889390    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:27.372962    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:29.983480    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:32.368701    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:34.538265    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:36.874845    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:39.555944    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:41.875706    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:44.405645    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:46.865610    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:48.941374    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:51.609634    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:53.863867    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:55.865083    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:58.385934    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:22:00.871815    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:22:02.880104    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:22:05.369375    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:22:07.370287    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:22:09.371867    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:22:11.857788    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:22:13.864191    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:22:15.875303    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:22:18.356431    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:22:20.377080    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:22:22.862944    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:22:24.868645    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:22:27.371934    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:22:29.372753    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:22:31.868218    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:22:34.409873    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:22:36.860710    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:22:38.865537    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:22:40.866196    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:22:42.879909    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:22:45.366317    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:22:47.383374    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:22:49.417690    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:22:51.858916    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:22:53.869385    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:22:56.367170    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:22:59.336598    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:23:01.363120    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:23:03.370433    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:23:05.858873    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:23:07.875450    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:23:10.372154    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:23:12.870275    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:23:15.938076    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:23:18.370918    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:23:20.879375    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:23:23.356373    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:23:25.372220    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:23:27.866830    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:23:30.367459    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:23:32.870799    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:23:35.363291    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:23:37.376972    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:23:39.858778    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:23:41.866900    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:23:43.873136    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:23:46.366634    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:23:48.371499    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:23:50.862112    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:23:53.361071    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:23:55.367163    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:23:57.865858    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:23:59.866230    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:24:02.411161    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:24:04.857284    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:24:05.349929    8448 pod_ready.go:81] duration metric: took 4m0.0091187s for pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace to be "Ready" ...
	E0719 05:24:05.350042    8448 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0719 05:24:05.350042    8448 pod_ready.go:38] duration metric: took 5m27.5904265s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 05:24:05.350126    8448 api_server.go:52] waiting for apiserver process to appear ...
	I0719 05:24:05.361122    8448 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 05:24:05.433091    8448 logs.go:276] 2 containers: [891502bd603e 7d9c9067b30f]
	I0719 05:24:05.445989    8448 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 05:24:05.534248    8448 logs.go:276] 2 containers: [41eb1254f9cf 5bd4db013300]
	I0719 05:24:05.545202    8448 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 05:24:05.612606    8448 logs.go:276] 2 containers: [d46bbd6b65c5 59a10474c608]
	I0719 05:24:05.625073    8448 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 05:24:05.684340    8448 logs.go:276] 2 containers: [95dad6d99e18 105b39486f2b]
	I0719 05:24:05.696662    8448 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 05:24:05.747977    8448 logs.go:276] 2 containers: [756b5c94abf2 4aae10524ed9]
	I0719 05:24:05.758646    8448 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 05:24:05.810247    8448 logs.go:276] 2 containers: [e213f42e7fc3 43e043a0349d]
	I0719 05:24:05.819720    8448 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 05:24:05.868800    8448 logs.go:276] 0 containers: []
	W0719 05:24:05.869380    8448 logs.go:278] No container was found matching "kindnet"
	I0719 05:24:05.881404    8448 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 05:24:05.929550    8448 logs.go:276] 2 containers: [ccd5bac65a54 cf6d6836594d]
	I0719 05:24:05.940645    8448 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0719 05:24:05.990663    8448 logs.go:276] 1 containers: [91ec1ba3377b]
	I0719 05:24:05.990663    8448 logs.go:123] Gathering logs for kube-proxy [756b5c94abf2] ...
	I0719 05:24:05.990663    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756b5c94abf2"
	I0719 05:24:06.043052    8448 logs.go:123] Gathering logs for kube-controller-manager [e213f42e7fc3] ...
	I0719 05:24:06.043052    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e213f42e7fc3"
	I0719 05:24:06.122579    8448 logs.go:123] Gathering logs for kube-controller-manager [43e043a0349d] ...
	I0719 05:24:06.122579    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43e043a0349d"
	I0719 05:24:06.232883    8448 logs.go:123] Gathering logs for storage-provisioner [ccd5bac65a54] ...
	I0719 05:24:06.232883    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccd5bac65a54"
	I0719 05:24:06.284477    8448 logs.go:123] Gathering logs for storage-provisioner [cf6d6836594d] ...
	I0719 05:24:06.284477    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf6d6836594d"
	I0719 05:24:06.333041    8448 logs.go:123] Gathering logs for kubelet ...
	I0719 05:24:06.333041    8448 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0719 05:24:06.406469    8448 logs.go:138] Found kubelet problem: Jul 19 05:18:37 old-k8s-version-546500 kubelet[1892]: E0719 05:18:37.459293    1892 reflector.go:138] object-"kube-system"/"kube-proxy-token-2pc7z": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-2pc7z" is forbidden: User "system:node:old-k8s-version-546500" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-546500' and this object
	W0719 05:24:06.406469    8448 logs.go:138] Found kubelet problem: Jul 19 05:18:37 old-k8s-version-546500 kubelet[1892]: E0719 05:18:37.460370    1892 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-546500" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-546500' and this object
	W0719 05:24:06.415518    8448 logs.go:138] Found kubelet problem: Jul 19 05:18:43 old-k8s-version-546500 kubelet[1892]: E0719 05:18:43.863000    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0719 05:24:06.417808    8448 logs.go:138] Found kubelet problem: Jul 19 05:18:44 old-k8s-version-546500 kubelet[1892]: E0719 05:18:44.255969    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.419510    8448 logs.go:138] Found kubelet problem: Jul 19 05:18:45 old-k8s-version-546500 kubelet[1892]: E0719 05:18:45.411157    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.421457    8448 logs.go:138] Found kubelet problem: Jul 19 05:19:00 old-k8s-version-546500 kubelet[1892]: E0719 05:19:00.224923    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0719 05:24:06.425541    8448 logs.go:138] Found kubelet problem: Jul 19 05:19:03 old-k8s-version-546500 kubelet[1892]: E0719 05:19:03.887468    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0719 05:24:06.426557    8448 logs.go:138] Found kubelet problem: Jul 19 05:19:04 old-k8s-version-546500 kubelet[1892]: E0719 05:19:04.419175    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.426557    8448 logs.go:138] Found kubelet problem: Jul 19 05:19:05 old-k8s-version-546500 kubelet[1892]: E0719 05:19:05.437479    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.427570    8448 logs.go:138] Found kubelet problem: Jul 19 05:19:06 old-k8s-version-546500 kubelet[1892]: E0719 05:19:06.485419    1892 pod_workers.go:191] Error syncing pod a5922a7c-6975-4659-8506-b800bd24f542 ("storage-provisioner_kube-system(a5922a7c-6975-4659-8506-b800bd24f542)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a5922a7c-6975-4659-8506-b800bd24f542)"
	W0719 05:24:06.428126    8448 logs.go:138] Found kubelet problem: Jul 19 05:19:14 old-k8s-version-546500 kubelet[1892]: E0719 05:19:14.155106    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.431467    8448 logs.go:138] Found kubelet problem: Jul 19 05:19:25 old-k8s-version-546500 kubelet[1892]: E0719 05:19:25.355701    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0719 05:24:06.433744    8448 logs.go:138] Found kubelet problem: Jul 19 05:19:25 old-k8s-version-546500 kubelet[1892]: E0719 05:19:25.408420    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0719 05:24:06.434077    8448 logs.go:138] Found kubelet problem: Jul 19 05:19:38 old-k8s-version-546500 kubelet[1892]: E0719 05:19:38.153300    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.434422    8448 logs.go:138] Found kubelet problem: Jul 19 05:19:40 old-k8s-version-546500 kubelet[1892]: E0719 05:19:40.152110    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.435698    8448 logs.go:138] Found kubelet problem: Jul 19 05:19:53 old-k8s-version-546500 kubelet[1892]: E0719 05:19:53.702175    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0719 05:24:06.436793    8448 logs.go:138] Found kubelet problem: Jul 19 05:19:54 old-k8s-version-546500 kubelet[1892]: E0719 05:19:54.149258    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.436995    8448 logs.go:138] Found kubelet problem: Jul 19 05:20:05 old-k8s-version-546500 kubelet[1892]: E0719 05:20:05.173619    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.439423    8448 logs.go:138] Found kubelet problem: Jul 19 05:20:06 old-k8s-version-546500 kubelet[1892]: E0719 05:20:06.210813    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0719 05:24:06.439755    8448 logs.go:138] Found kubelet problem: Jul 19 05:20:18 old-k8s-version-546500 kubelet[1892]: E0719 05:20:18.147603    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.439937    8448 logs.go:138] Found kubelet problem: Jul 19 05:20:20 old-k8s-version-546500 kubelet[1892]: E0719 05:20:20.148548    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.439937    8448 logs.go:138] Found kubelet problem: Jul 19 05:20:31 old-k8s-version-546500 kubelet[1892]: E0719 05:20:31.146645    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.440418    8448 logs.go:138] Found kubelet problem: Jul 19 05:20:33 old-k8s-version-546500 kubelet[1892]: E0719 05:20:33.148159    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.440635    8448 logs.go:138] Found kubelet problem: Jul 19 05:20:45 old-k8s-version-546500 kubelet[1892]: E0719 05:20:45.141725    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.441862    8448 logs.go:138] Found kubelet problem: Jul 19 05:20:45 old-k8s-version-546500 kubelet[1892]: E0719 05:20:45.698012    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0719 05:24:06.442893    8448 logs.go:138] Found kubelet problem: Jul 19 05:20:58 old-k8s-version-546500 kubelet[1892]: E0719 05:20:58.145596    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.442893    8448 logs.go:138] Found kubelet problem: Jul 19 05:20:58 old-k8s-version-546500 kubelet[1892]: E0719 05:20:58.148350    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.442893    8448 logs.go:138] Found kubelet problem: Jul 19 05:21:10 old-k8s-version-546500 kubelet[1892]: E0719 05:21:10.156224    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.442893    8448 logs.go:138] Found kubelet problem: Jul 19 05:21:11 old-k8s-version-546500 kubelet[1892]: E0719 05:21:11.138243    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.443682    8448 logs.go:138] Found kubelet problem: Jul 19 05:21:25 old-k8s-version-546500 kubelet[1892]: E0719 05:21:25.138270    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.443845    8448 logs.go:138] Found kubelet problem: Jul 19 05:21:25 old-k8s-version-546500 kubelet[1892]: E0719 05:21:25.138889    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.444022    8448 logs.go:138] Found kubelet problem: Jul 19 05:21:36 old-k8s-version-546500 kubelet[1892]: E0719 05:21:36.139658    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.445299    8448 logs.go:138] Found kubelet problem: Jul 19 05:21:38 old-k8s-version-546500 kubelet[1892]: E0719 05:21:38.490515    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0719 05:24:06.445299    8448 logs.go:138] Found kubelet problem: Jul 19 05:21:49 old-k8s-version-546500 kubelet[1892]: E0719 05:21:49.135258    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.446227    8448 logs.go:138] Found kubelet problem: Jul 19 05:21:51 old-k8s-version-546500 kubelet[1892]: E0719 05:21:51.133793    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.446529    8448 logs.go:138] Found kubelet problem: Jul 19 05:22:03 old-k8s-version-546500 kubelet[1892]: E0719 05:22:03.139366    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.446716    8448 logs.go:138] Found kubelet problem: Jul 19 05:22:03 old-k8s-version-546500 kubelet[1892]: E0719 05:22:03.140628    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.447963    8448 logs.go:138] Found kubelet problem: Jul 19 05:22:14 old-k8s-version-546500 kubelet[1892]: E0719 05:22:14.788198    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0719 05:24:06.448970    8448 logs.go:138] Found kubelet problem: Jul 19 05:22:15 old-k8s-version-546500 kubelet[1892]: E0719 05:22:15.132363    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.449273    8448 logs.go:138] Found kubelet problem: Jul 19 05:22:26 old-k8s-version-546500 kubelet[1892]: E0719 05:22:26.132249    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.449433    8448 logs.go:138] Found kubelet problem: Jul 19 05:22:28 old-k8s-version-546500 kubelet[1892]: E0719 05:22:28.133462    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.449829    8448 logs.go:138] Found kubelet problem: Jul 19 05:22:39 old-k8s-version-546500 kubelet[1892]: E0719 05:22:39.132621    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.450057    8448 logs.go:138] Found kubelet problem: Jul 19 05:22:42 old-k8s-version-546500 kubelet[1892]: E0719 05:22:42.128679    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.450322    8448 logs.go:138] Found kubelet problem: Jul 19 05:22:54 old-k8s-version-546500 kubelet[1892]: E0719 05:22:54.127469    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.450589    8448 logs.go:138] Found kubelet problem: Jul 19 05:22:57 old-k8s-version-546500 kubelet[1892]: E0719 05:22:57.129488    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.450807    8448 logs.go:138] Found kubelet problem: Jul 19 05:23:09 old-k8s-version-546500 kubelet[1892]: E0719 05:23:09.127590    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.451002    8448 logs.go:138] Found kubelet problem: Jul 19 05:23:10 old-k8s-version-546500 kubelet[1892]: E0719 05:23:10.125075    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.451207    8448 logs.go:138] Found kubelet problem: Jul 19 05:23:22 old-k8s-version-546500 kubelet[1892]: E0719 05:23:22.139954    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.451415    8448 logs.go:138] Found kubelet problem: Jul 19 05:23:24 old-k8s-version-546500 kubelet[1892]: E0719 05:23:24.125805    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.451415    8448 logs.go:138] Found kubelet problem: Jul 19 05:23:37 old-k8s-version-546500 kubelet[1892]: E0719 05:23:37.126302    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.452057    8448 logs.go:138] Found kubelet problem: Jul 19 05:23:39 old-k8s-version-546500 kubelet[1892]: E0719 05:23:39.125983    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.452208    8448 logs.go:138] Found kubelet problem: Jul 19 05:23:49 old-k8s-version-546500 kubelet[1892]: E0719 05:23:49.119898    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.452409    8448 logs.go:138] Found kubelet problem: Jul 19 05:23:53 old-k8s-version-546500 kubelet[1892]: E0719 05:23:53.122456    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.452663    8448 logs.go:138] Found kubelet problem: Jul 19 05:24:01 old-k8s-version-546500 kubelet[1892]: E0719 05:24:01.120229    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	I0719 05:24:06.452663    8448 logs.go:123] Gathering logs for describe nodes ...
	I0719 05:24:06.452663    8448 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 05:24:06.664312    8448 logs.go:123] Gathering logs for kube-apiserver [7d9c9067b30f] ...
	I0719 05:24:06.664345    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d9c9067b30f"
	I0719 05:24:06.748433    8448 logs.go:123] Gathering logs for kube-proxy [4aae10524ed9] ...
	I0719 05:24:06.749425    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aae10524ed9"
	I0719 05:24:06.801309    8448 logs.go:123] Gathering logs for Docker ...
	I0719 05:24:06.801479    8448 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 05:24:06.850090    8448 logs.go:123] Gathering logs for etcd [5bd4db013300] ...
	I0719 05:24:06.850090    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bd4db013300"
	I0719 05:24:06.915455    8448 logs.go:123] Gathering logs for coredns [59a10474c608] ...
	I0719 05:24:06.916038    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59a10474c608"
	I0719 05:24:06.970623    8448 logs.go:123] Gathering logs for kube-scheduler [95dad6d99e18] ...
	I0719 05:24:06.970623    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95dad6d99e18"
	I0719 05:24:07.019252    8448 logs.go:123] Gathering logs for kubernetes-dashboard [91ec1ba3377b] ...
	I0719 05:24:07.019252    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91ec1ba3377b"
	I0719 05:24:07.069016    8448 logs.go:123] Gathering logs for kube-scheduler [105b39486f2b] ...
	I0719 05:24:07.069016    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 105b39486f2b"
	I0719 05:24:07.120243    8448 logs.go:123] Gathering logs for container status ...
	I0719 05:24:07.120243    8448 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 05:24:07.225474    8448 logs.go:123] Gathering logs for dmesg ...
	I0719 05:24:07.225474    8448 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 05:24:07.255386    8448 logs.go:123] Gathering logs for kube-apiserver [891502bd603e] ...
	I0719 05:24:07.255386    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891502bd603e"
	I0719 05:24:07.336592    8448 logs.go:123] Gathering logs for etcd [41eb1254f9cf] ...
	I0719 05:24:07.336592    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41eb1254f9cf"
	I0719 05:24:07.411124    8448 logs.go:123] Gathering logs for coredns [d46bbd6b65c5] ...
	I0719 05:24:07.411124    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d46bbd6b65c5"
	I0719 05:24:07.463496    8448 out.go:304] Setting ErrFile to fd 1480...
	I0719 05:24:07.464099    8448 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0719 05:24:07.464278    8448 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0719 05:24:07.464278    8448 out.go:239]   Jul 19 05:23:37 old-k8s-version-546500 kubelet[1892]: E0719 05:23:37.126302    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:07.464278    8448 out.go:239]   Jul 19 05:23:39 old-k8s-version-546500 kubelet[1892]: E0719 05:23:39.125983    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:07.464278    8448 out.go:239]   Jul 19 05:23:49 old-k8s-version-546500 kubelet[1892]: E0719 05:23:49.119898    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:07.464278    8448 out.go:239]   Jul 19 05:23:53 old-k8s-version-546500 kubelet[1892]: E0719 05:23:53.122456    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:07.464278    8448 out.go:239]   Jul 19 05:24:01 old-k8s-version-546500 kubelet[1892]: E0719 05:24:01.120229    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	I0719 05:24:07.464278    8448 out.go:304] Setting ErrFile to fd 1480...
	I0719 05:24:07.464278    8448 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 05:24:17.495125    8448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 05:24:17.525961    8448 api_server.go:72] duration metric: took 5m52.8270487s to wait for apiserver process to appear ...
	I0719 05:24:17.525961    8448 api_server.go:88] waiting for apiserver healthz status ...
	I0719 05:24:17.537899    8448 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 05:24:17.594412    8448 logs.go:276] 2 containers: [891502bd603e 7d9c9067b30f]
	I0719 05:24:17.604414    8448 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 05:24:17.652411    8448 logs.go:276] 2 containers: [41eb1254f9cf 5bd4db013300]
	I0719 05:24:17.663413    8448 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 05:24:17.706432    8448 logs.go:276] 2 containers: [d46bbd6b65c5 59a10474c608]
	I0719 05:24:17.728864    8448 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 05:24:17.769467    8448 logs.go:276] 2 containers: [95dad6d99e18 105b39486f2b]
	I0719 05:24:17.779844    8448 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 05:24:17.821386    8448 logs.go:276] 2 containers: [756b5c94abf2 4aae10524ed9]
	I0719 05:24:17.835210    8448 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 05:24:17.885180    8448 logs.go:276] 2 containers: [e213f42e7fc3 43e043a0349d]
	I0719 05:24:17.896528    8448 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 05:24:17.942266    8448 logs.go:276] 0 containers: []
	W0719 05:24:17.942266    8448 logs.go:278] No container was found matching "kindnet"
	I0719 05:24:17.956027    8448 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0719 05:24:18.001875    8448 logs.go:276] 1 containers: [91ec1ba3377b]
	I0719 05:24:18.013850    8448 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 05:24:18.064061    8448 logs.go:276] 2 containers: [ccd5bac65a54 cf6d6836594d]
	I0719 05:24:18.064111    8448 logs.go:123] Gathering logs for describe nodes ...
	I0719 05:24:18.064111    8448 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 05:24:18.267829    8448 logs.go:123] Gathering logs for kube-apiserver [891502bd603e] ...
	I0719 05:24:18.267829    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891502bd603e"
	I0719 05:24:18.343105    8448 logs.go:123] Gathering logs for etcd [41eb1254f9cf] ...
	I0719 05:24:18.343105    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41eb1254f9cf"
	I0719 05:24:18.416510    8448 logs.go:123] Gathering logs for Docker ...
	I0719 05:24:18.416510    8448 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 05:24:18.463962    8448 logs.go:123] Gathering logs for kube-controller-manager [e213f42e7fc3] ...
	I0719 05:24:18.463962    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e213f42e7fc3"
	I0719 05:24:18.536418    8448 logs.go:123] Gathering logs for kubernetes-dashboard [91ec1ba3377b] ...
	I0719 05:24:18.536418    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91ec1ba3377b"
	I0719 05:24:18.590432    8448 logs.go:123] Gathering logs for storage-provisioner [ccd5bac65a54] ...
	I0719 05:24:18.590432    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccd5bac65a54"
	I0719 05:24:18.639296    8448 logs.go:123] Gathering logs for coredns [59a10474c608] ...
	I0719 05:24:18.639854    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59a10474c608"
	I0719 05:24:18.692817    8448 logs.go:123] Gathering logs for kube-scheduler [95dad6d99e18] ...
	I0719 05:24:18.692817    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95dad6d99e18"
	I0719 05:24:18.740961    8448 logs.go:123] Gathering logs for kube-proxy [756b5c94abf2] ...
	I0719 05:24:18.740961    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756b5c94abf2"
	I0719 05:24:18.789978    8448 logs.go:123] Gathering logs for kube-controller-manager [43e043a0349d] ...
	I0719 05:24:18.790054    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43e043a0349d"
	I0719 05:24:18.858749    8448 logs.go:123] Gathering logs for kubelet ...
	I0719 05:24:18.858749    8448 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0719 05:24:18.931670    8448 logs.go:138] Found kubelet problem: Jul 19 05:18:37 old-k8s-version-546500 kubelet[1892]: E0719 05:18:37.459293    1892 reflector.go:138] object-"kube-system"/"kube-proxy-token-2pc7z": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-2pc7z" is forbidden: User "system:node:old-k8s-version-546500" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-546500' and this object
	W0719 05:24:18.932720    8448 logs.go:138] Found kubelet problem: Jul 19 05:18:37 old-k8s-version-546500 kubelet[1892]: E0719 05:18:37.460370    1892 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-546500" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-546500' and this object
	W0719 05:24:18.939036    8448 logs.go:138] Found kubelet problem: Jul 19 05:18:43 old-k8s-version-546500 kubelet[1892]: E0719 05:18:43.863000    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0719 05:24:18.940107    8448 logs.go:138] Found kubelet problem: Jul 19 05:18:44 old-k8s-version-546500 kubelet[1892]: E0719 05:18:44.255969    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.941088    8448 logs.go:138] Found kubelet problem: Jul 19 05:18:45 old-k8s-version-546500 kubelet[1892]: E0719 05:18:45.411157    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.943645    8448 logs.go:138] Found kubelet problem: Jul 19 05:19:00 old-k8s-version-546500 kubelet[1892]: E0719 05:19:00.224923    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0719 05:24:18.947155    8448 logs.go:138] Found kubelet problem: Jul 19 05:19:03 old-k8s-version-546500 kubelet[1892]: E0719 05:19:03.887468    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0719 05:24:18.948271    8448 logs.go:138] Found kubelet problem: Jul 19 05:19:04 old-k8s-version-546500 kubelet[1892]: E0719 05:19:04.419175    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.948271    8448 logs.go:138] Found kubelet problem: Jul 19 05:19:05 old-k8s-version-546500 kubelet[1892]: E0719 05:19:05.437479    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.949485    8448 logs.go:138] Found kubelet problem: Jul 19 05:19:06 old-k8s-version-546500 kubelet[1892]: E0719 05:19:06.485419    1892 pod_workers.go:191] Error syncing pod a5922a7c-6975-4659-8506-b800bd24f542 ("storage-provisioner_kube-system(a5922a7c-6975-4659-8506-b800bd24f542)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a5922a7c-6975-4659-8506-b800bd24f542)"
	W0719 05:24:18.949485    8448 logs.go:138] Found kubelet problem: Jul 19 05:19:14 old-k8s-version-546500 kubelet[1892]: E0719 05:19:14.155106    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.952085    8448 logs.go:138] Found kubelet problem: Jul 19 05:19:25 old-k8s-version-546500 kubelet[1892]: E0719 05:19:25.355701    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0719 05:24:18.955930    8448 logs.go:138] Found kubelet problem: Jul 19 05:19:25 old-k8s-version-546500 kubelet[1892]: E0719 05:19:25.408420    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0719 05:24:18.956534    8448 logs.go:138] Found kubelet problem: Jul 19 05:19:38 old-k8s-version-546500 kubelet[1892]: E0719 05:19:38.153300    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.957033    8448 logs.go:138] Found kubelet problem: Jul 19 05:19:40 old-k8s-version-546500 kubelet[1892]: E0719 05:19:40.152110    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.958330    8448 logs.go:138] Found kubelet problem: Jul 19 05:19:53 old-k8s-version-546500 kubelet[1892]: E0719 05:19:53.702175    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0719 05:24:18.959340    8448 logs.go:138] Found kubelet problem: Jul 19 05:19:54 old-k8s-version-546500 kubelet[1892]: E0719 05:19:54.149258    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.959340    8448 logs.go:138] Found kubelet problem: Jul 19 05:20:05 old-k8s-version-546500 kubelet[1892]: E0719 05:20:05.173619    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.961420    8448 logs.go:138] Found kubelet problem: Jul 19 05:20:06 old-k8s-version-546500 kubelet[1892]: E0719 05:20:06.210813    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0719 05:24:18.961420    8448 logs.go:138] Found kubelet problem: Jul 19 05:20:18 old-k8s-version-546500 kubelet[1892]: E0719 05:20:18.147603    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.962276    8448 logs.go:138] Found kubelet problem: Jul 19 05:20:20 old-k8s-version-546500 kubelet[1892]: E0719 05:20:20.148548    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.962416    8448 logs.go:138] Found kubelet problem: Jul 19 05:20:31 old-k8s-version-546500 kubelet[1892]: E0719 05:20:31.146645    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.962624    8448 logs.go:138] Found kubelet problem: Jul 19 05:20:33 old-k8s-version-546500 kubelet[1892]: E0719 05:20:33.148159    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.962826    8448 logs.go:138] Found kubelet problem: Jul 19 05:20:45 old-k8s-version-546500 kubelet[1892]: E0719 05:20:45.141725    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.965004    8448 logs.go:138] Found kubelet problem: Jul 19 05:20:45 old-k8s-version-546500 kubelet[1892]: E0719 05:20:45.698012    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0719 05:24:18.965235    8448 logs.go:138] Found kubelet problem: Jul 19 05:20:58 old-k8s-version-546500 kubelet[1892]: E0719 05:20:58.145596    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.965368    8448 logs.go:138] Found kubelet problem: Jul 19 05:20:58 old-k8s-version-546500 kubelet[1892]: E0719 05:20:58.148350    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.965550    8448 logs.go:138] Found kubelet problem: Jul 19 05:21:10 old-k8s-version-546500 kubelet[1892]: E0719 05:21:10.156224    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.965764    8448 logs.go:138] Found kubelet problem: Jul 19 05:21:11 old-k8s-version-546500 kubelet[1892]: E0719 05:21:11.138243    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.965968    8448 logs.go:138] Found kubelet problem: Jul 19 05:21:25 old-k8s-version-546500 kubelet[1892]: E0719 05:21:25.138270    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.966165    8448 logs.go:138] Found kubelet problem: Jul 19 05:21:25 old-k8s-version-546500 kubelet[1892]: E0719 05:21:25.138889    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.966349    8448 logs.go:138] Found kubelet problem: Jul 19 05:21:36 old-k8s-version-546500 kubelet[1892]: E0719 05:21:36.139658    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.967564    8448 logs.go:138] Found kubelet problem: Jul 19 05:21:38 old-k8s-version-546500 kubelet[1892]: E0719 05:21:38.490515    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0719 05:24:18.967564    8448 logs.go:138] Found kubelet problem: Jul 19 05:21:49 old-k8s-version-546500 kubelet[1892]: E0719 05:21:49.135258    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.968816    8448 logs.go:138] Found kubelet problem: Jul 19 05:21:51 old-k8s-version-546500 kubelet[1892]: E0719 05:21:51.133793    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.969142    8448 logs.go:138] Found kubelet problem: Jul 19 05:22:03 old-k8s-version-546500 kubelet[1892]: E0719 05:22:03.139366    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.969287    8448 logs.go:138] Found kubelet problem: Jul 19 05:22:03 old-k8s-version-546500 kubelet[1892]: E0719 05:22:03.140628    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.971731    8448 logs.go:138] Found kubelet problem: Jul 19 05:22:14 old-k8s-version-546500 kubelet[1892]: E0719 05:22:14.788198    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0719 05:24:18.971731    8448 logs.go:138] Found kubelet problem: Jul 19 05:22:15 old-k8s-version-546500 kubelet[1892]: E0719 05:22:15.132363    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.972144    8448 logs.go:138] Found kubelet problem: Jul 19 05:22:26 old-k8s-version-546500 kubelet[1892]: E0719 05:22:26.132249    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.972330    8448 logs.go:138] Found kubelet problem: Jul 19 05:22:28 old-k8s-version-546500 kubelet[1892]: E0719 05:22:28.133462    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.972527    8448 logs.go:138] Found kubelet problem: Jul 19 05:22:39 old-k8s-version-546500 kubelet[1892]: E0719 05:22:39.132621    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.972713    8448 logs.go:138] Found kubelet problem: Jul 19 05:22:42 old-k8s-version-546500 kubelet[1892]: E0719 05:22:42.128679    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.972915    8448 logs.go:138] Found kubelet problem: Jul 19 05:22:54 old-k8s-version-546500 kubelet[1892]: E0719 05:22:54.127469    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.973101    8448 logs.go:138] Found kubelet problem: Jul 19 05:22:57 old-k8s-version-546500 kubelet[1892]: E0719 05:22:57.129488    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.973297    8448 logs.go:138] Found kubelet problem: Jul 19 05:23:09 old-k8s-version-546500 kubelet[1892]: E0719 05:23:09.127590    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.973481    8448 logs.go:138] Found kubelet problem: Jul 19 05:23:10 old-k8s-version-546500 kubelet[1892]: E0719 05:23:10.125075    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.973677    8448 logs.go:138] Found kubelet problem: Jul 19 05:23:22 old-k8s-version-546500 kubelet[1892]: E0719 05:23:22.139954    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.973939    8448 logs.go:138] Found kubelet problem: Jul 19 05:23:24 old-k8s-version-546500 kubelet[1892]: E0719 05:23:24.125805    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.973939    8448 logs.go:138] Found kubelet problem: Jul 19 05:23:37 old-k8s-version-546500 kubelet[1892]: E0719 05:23:37.126302    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.974629    8448 logs.go:138] Found kubelet problem: Jul 19 05:23:39 old-k8s-version-546500 kubelet[1892]: E0719 05:23:39.125983    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.974770    8448 logs.go:138] Found kubelet problem: Jul 19 05:23:49 old-k8s-version-546500 kubelet[1892]: E0719 05:23:49.119898    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.975013    8448 logs.go:138] Found kubelet problem: Jul 19 05:23:53 old-k8s-version-546500 kubelet[1892]: E0719 05:23:53.122456    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.975220    8448 logs.go:138] Found kubelet problem: Jul 19 05:24:01 old-k8s-version-546500 kubelet[1892]: E0719 05:24:01.120229    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.975419    8448 logs.go:138] Found kubelet problem: Jul 19 05:24:07 old-k8s-version-546500 kubelet[1892]: E0719 05:24:07.124063    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.975606    8448 logs.go:138] Found kubelet problem: Jul 19 05:24:13 old-k8s-version-546500 kubelet[1892]: E0719 05:24:13.128990    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	I0719 05:24:18.975606    8448 logs.go:123] Gathering logs for dmesg ...
	I0719 05:24:18.975606    8448 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 05:24:19.004077    8448 logs.go:123] Gathering logs for kube-apiserver [7d9c9067b30f] ...
	I0719 05:24:19.004077    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d9c9067b30f"
	I0719 05:24:19.089889    8448 logs.go:123] Gathering logs for coredns [d46bbd6b65c5] ...
	I0719 05:24:19.089889    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d46bbd6b65c5"
	I0719 05:24:19.145795    8448 logs.go:123] Gathering logs for storage-provisioner [cf6d6836594d] ...
	I0719 05:24:19.145836    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf6d6836594d"
	I0719 05:24:19.212320    8448 logs.go:123] Gathering logs for etcd [5bd4db013300] ...
	I0719 05:24:19.212522    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bd4db013300"
	I0719 05:24:19.275553    8448 logs.go:123] Gathering logs for kube-scheduler [105b39486f2b] ...
	I0719 05:24:19.276163    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 105b39486f2b"
	I0719 05:24:19.336035    8448 logs.go:123] Gathering logs for kube-proxy [4aae10524ed9] ...
	I0719 05:24:19.336035    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aae10524ed9"
	I0719 05:24:19.383665    8448 logs.go:123] Gathering logs for container status ...
	I0719 05:24:19.383665    8448 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 05:24:19.483349    8448 out.go:304] Setting ErrFile to fd 1480...
	I0719 05:24:19.483917    8448 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0719 05:24:19.484071    8448 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0719 05:24:19.484071    8448 out.go:239]   Jul 19 05:23:49 old-k8s-version-546500 kubelet[1892]: E0719 05:23:49.119898    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	  Jul 19 05:23:49 old-k8s-version-546500 kubelet[1892]: E0719 05:23:49.119898    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:19.484071    8448 out.go:239]   Jul 19 05:23:53 old-k8s-version-546500 kubelet[1892]: E0719 05:23:53.122456    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Jul 19 05:23:53 old-k8s-version-546500 kubelet[1892]: E0719 05:23:53.122456    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:19.484157    8448 out.go:239]   Jul 19 05:24:01 old-k8s-version-546500 kubelet[1892]: E0719 05:24:01.120229    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	  Jul 19 05:24:01 old-k8s-version-546500 kubelet[1892]: E0719 05:24:01.120229    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:19.484193    8448 out.go:239]   Jul 19 05:24:07 old-k8s-version-546500 kubelet[1892]: E0719 05:24:07.124063    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Jul 19 05:24:07 old-k8s-version-546500 kubelet[1892]: E0719 05:24:07.124063    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:19.484193    8448 out.go:239]   Jul 19 05:24:13 old-k8s-version-546500 kubelet[1892]: E0719 05:24:13.128990    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	  Jul 19 05:24:13 old-k8s-version-546500 kubelet[1892]: E0719 05:24:13.128990    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	I0719 05:24:19.484193    8448 out.go:304] Setting ErrFile to fd 1480...
	I0719 05:24:19.484291    8448 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 05:24:29.508932    8448 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52543/healthz ...
	I0719 05:24:29.527121    8448 api_server.go:279] https://127.0.0.1:52543/healthz returned 200:
	ok
	I0719 05:24:29.531027    8448 out.go:177] 
	W0719 05:24:29.533743    8448 out.go:239] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0719 05:24:29.533849    8448 out.go:239] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	* Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0719 05:24:29.533849    8448 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	* Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0719 05:24:29.533849    8448 out.go:239] * 
	* 
	W0719 05:24:29.535224    8448 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 05:24:29.537558    8448 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-windows-amd64.exe start -p old-k8s-version-546500 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.20.0": exit status 102
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-546500
helpers_test.go:235: (dbg) docker inspect old-k8s-version-546500:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a18c23eb39423e6e406e19177a005cd41ccb7e55bb26964791c43180b45e6fc1",
	        "Created": "2024-07-19T05:12:45.827905274Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 873626,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-07-19T05:17:34.65460303Z",
	            "FinishedAt": "2024-07-19T05:17:28.137682884Z"
	        },
	        "Image": "sha256:7bda27423b38cbebec7632cdf15a8fcb063ff209d17af249e6b3f1fbdb5fa681",
	        "ResolvConfPath": "/var/lib/docker/containers/a18c23eb39423e6e406e19177a005cd41ccb7e55bb26964791c43180b45e6fc1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a18c23eb39423e6e406e19177a005cd41ccb7e55bb26964791c43180b45e6fc1/hostname",
	        "HostsPath": "/var/lib/docker/containers/a18c23eb39423e6e406e19177a005cd41ccb7e55bb26964791c43180b45e6fc1/hosts",
	        "LogPath": "/var/lib/docker/containers/a18c23eb39423e6e406e19177a005cd41ccb7e55bb26964791c43180b45e6fc1/a18c23eb39423e6e406e19177a005cd41ccb7e55bb26964791c43180b45e6fc1-json.log",
	        "Name": "/old-k8s-version-546500",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-546500:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-546500",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/6543e37e883797bfb77eede4fc201dafe1d5b2c4a9210097812b95326cb3d76a-init/diff:/var/lib/docker/overlay2/8afef3549fbfde76a8b1d15736e3430a7f83f1f1968778d28daa6047c0f61b28/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6543e37e883797bfb77eede4fc201dafe1d5b2c4a9210097812b95326cb3d76a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6543e37e883797bfb77eede4fc201dafe1d5b2c4a9210097812b95326cb3d76a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6543e37e883797bfb77eede4fc201dafe1d5b2c4a9210097812b95326cb3d76a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-546500",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-546500/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-546500",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-546500",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-546500",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c81216f526608673236697c08eb7eccf5b2713fd4c9f2c9619776101f1e4dae7",
	            "SandboxKey": "/var/run/docker/netns/c81216f52660",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52539"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52540"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52541"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52542"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52543"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-546500": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "NetworkID": "1c89bc746cb4d085e917c86d7dbfcb377068b3d2d9f671a1ddd7b763093388e0",
	                    "EndpointID": "f377d56e9fbd18efb944158301a883cf9e69bb7eb9d78a76911fe727025fa5a6",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-546500",
	                        "a18c23eb3942"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-546500 -n old-k8s-version-546500
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-546500 -n old-k8s-version-546500: (1.4933908s)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p old-k8s-version-546500 logs -n 25
E0719 05:24:33.816822   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kindnet-255400\client.crt: The system cannot find the path specified.
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p old-k8s-version-546500 logs -n 25: (3.1517779s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|-------------------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|-------------------|---------|---------------------|---------------------|
	| start   | -p old-k8s-version-546500                              | old-k8s-version-546500       | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:17 UTC |                     |
	|         | --memory=2200                                          |                              |                   |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |                   |         |                     |                     |
	|         | --kvm-network=default                                  |                              |                   |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |                   |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |                   |         |                     |                     |
	|         | --keep-context=false                                   |                              |                   |         |                     |                     |
	|         | --driver=docker                                        |                              |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |                   |         |                     |                     |
	| image   | embed-certs-561200 image list                          | embed-certs-561200           | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:20 UTC | 19 Jul 24 05:20 UTC |
	|         | --format=json                                          |                              |                   |         |                     |                     |
	| pause   | -p embed-certs-561200                                  | embed-certs-561200           | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:20 UTC | 19 Jul 24 05:20 UTC |
	|         | --alsologtostderr -v=1                                 |                              |                   |         |                     |                     |
	| unpause | -p embed-certs-561200                                  | embed-certs-561200           | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:20 UTC | 19 Jul 24 05:20 UTC |
	|         | --alsologtostderr -v=1                                 |                              |                   |         |                     |                     |
	| delete  | -p embed-certs-561200                                  | embed-certs-561200           | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:20 UTC | 19 Jul 24 05:21 UTC |
	| delete  | -p embed-certs-561200                                  | embed-certs-561200           | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:21 UTC | 19 Jul 24 05:21 UTC |
	| start   | -p newest-cni-800400 --memory=2200 --alsologtostderr   | newest-cni-800400            | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:21 UTC | 19 Jul 24 05:22 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |                   |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |                   |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |                   |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |                   |         |                     |                     |
	|         | --driver=docker --kubernetes-version=v1.31.0-beta.0    |                              |                   |         |                     |                     |
	| image   | default-k8s-diff-port-683400                           | default-k8s-diff-port-683400 | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:21 UTC | 19 Jul 24 05:21 UTC |
	|         | image list --format=json                               |                              |                   |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-683400 | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:21 UTC | 19 Jul 24 05:21 UTC |
	|         | default-k8s-diff-port-683400                           |                              |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |                   |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-683400 | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:21 UTC | 19 Jul 24 05:21 UTC |
	|         | default-k8s-diff-port-683400                           |                              |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |                   |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-683400 | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:21 UTC | 19 Jul 24 05:21 UTC |
	|         | default-k8s-diff-port-683400                           |                              |                   |         |                     |                     |
	| image   | no-preload-857600 image list                           | no-preload-857600            | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:21 UTC | 19 Jul 24 05:21 UTC |
	|         | --format=json                                          |                              |                   |         |                     |                     |
	| pause   | -p no-preload-857600                                   | no-preload-857600            | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:21 UTC |                     |
	|         | --alsologtostderr -v=1                                 |                              |                   |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-683400 | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:21 UTC | 19 Jul 24 05:21 UTC |
	|         | default-k8s-diff-port-683400                           |                              |                   |         |                     |                     |
	| delete  | -p no-preload-857600                                   | no-preload-857600            | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:22 UTC | 19 Jul 24 05:22 UTC |
	| delete  | -p no-preload-857600                                   | no-preload-857600            | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:22 UTC | 19 Jul 24 05:22 UTC |
	| addons  | enable metrics-server -p newest-cni-800400             | newest-cni-800400            | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:22 UTC | 19 Jul 24 05:22 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |                   |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |                   |         |                     |                     |
	| stop    | -p newest-cni-800400                                   | newest-cni-800400            | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:22 UTC | 19 Jul 24 05:22 UTC |
	|         | --alsologtostderr -v=3                                 |                              |                   |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-800400                  | newest-cni-800400            | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:22 UTC | 19 Jul 24 05:22 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |                   |         |                     |                     |
	| start   | -p newest-cni-800400 --memory=2200 --alsologtostderr   | newest-cni-800400            | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:22 UTC | 19 Jul 24 05:23 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |                   |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |                   |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |                   |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |                   |         |                     |                     |
	|         | --driver=docker --kubernetes-version=v1.31.0-beta.0    |                              |                   |         |                     |                     |
	| image   | newest-cni-800400 image list                           | newest-cni-800400            | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:23 UTC | 19 Jul 24 05:23 UTC |
	|         | --format=json                                          |                              |                   |         |                     |                     |
	| pause   | -p newest-cni-800400                                   | newest-cni-800400            | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:23 UTC | 19 Jul 24 05:23 UTC |
	|         | --alsologtostderr -v=1                                 |                              |                   |         |                     |                     |
	| unpause | -p newest-cni-800400                                   | newest-cni-800400            | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:23 UTC | 19 Jul 24 05:23 UTC |
	|         | --alsologtostderr -v=1                                 |                              |                   |         |                     |                     |
	| delete  | -p newest-cni-800400                                   | newest-cni-800400            | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:23 UTC | 19 Jul 24 05:23 UTC |
	| delete  | -p newest-cni-800400                                   | newest-cni-800400            | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:23 UTC | 19 Jul 24 05:23 UTC |
	|---------|--------------------------------------------------------|------------------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 05:22:46
	Running on machine: minikube3
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 05:22:46.365569     796 out.go:291] Setting OutFile to fd 1944 ...
	I0719 05:22:46.366378     796 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 05:22:46.366378     796 out.go:304] Setting ErrFile to fd 1872...
	I0719 05:22:46.366378     796 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 05:22:46.392791     796 out.go:298] Setting JSON to false
	I0719 05:22:46.395555     796 start.go:129] hostinfo: {"hostname":"minikube3","uptime":183551,"bootTime":1721183014,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0719 05:22:46.396310     796 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 05:22:46.404582     796 out.go:177] * [newest-cni-800400] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0719 05:22:46.408448     796 notify.go:220] Checking for updates...
	I0719 05:22:46.410563     796 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0719 05:22:46.413519     796 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 05:22:46.417068     796 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0719 05:22:46.420109     796 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 05:22:46.423169     796 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 05:22:46.426901     796 config.go:182] Loaded profile config "newest-cni-800400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0719 05:22:46.427680     796 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 05:22:46.714015     796 docker.go:123] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0719 05:22:46.725144     796 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0719 05:22:47.092223     796 info.go:266] docker info: {ID:924ecda6-fdfd-44a1-a6d3-1c1814628cc9 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:75 OomKillDisable:true NGoroutines:87 SystemTime:2024-07-19 05:22:47.041352322 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0719 05:22:47.097269     796 out.go:177] * Using the docker driver based on existing profile
	I0719 05:22:47.099845     796 start.go:297] selected driver: docker
	I0719 05:22:47.099928     796 start.go:901] validating driver "docker" against &{Name:newest-cni-800400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-800400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 05:22:47.099928     796 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 05:22:47.169652     796 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0719 05:22:47.544555     796 info.go:266] docker info: {ID:924ecda6-fdfd-44a1-a6d3-1c1814628cc9 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:75 OomKillDisable:true NGoroutines:87 SystemTime:2024-07-19 05:22:47.490916758 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0719 05:22:47.545650     796 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0719 05:22:47.545650     796 cni.go:84] Creating CNI manager for ""
	I0719 05:22:47.545650     796 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 05:22:47.545650     796 start.go:340] cluster config:
	{Name:newest-cni-800400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-800400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 05:22:47.549222     796 out.go:177] * Starting "newest-cni-800400" primary control-plane node in "newest-cni-800400" cluster
	I0719 05:22:47.552808     796 cache.go:121] Beginning downloading kic base image for docker with docker
	I0719 05:22:47.556458     796 out.go:177] * Pulling base image v0.0.44-1721324606-19298 ...
	I0719 05:22:47.561428     796 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local docker daemon
	I0719 05:22:47.561428     796 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0719 05:22:47.561428     796 preload.go:146] Found local preload: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0719 05:22:47.561428     796 cache.go:56] Caching tarball of preloaded images
	I0719 05:22:47.562073     796 preload.go:172] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0719 05:22:47.562316     796 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0719 05:22:47.562316     796 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\newest-cni-800400\config.json ...
	W0719 05:22:47.797384     796 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f is of wrong architecture
	I0719 05:22:47.797384     796 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f to local cache
	I0719 05:22:47.797384     796 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f.tar -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.44-1721324606-19298@sha256_1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f.tar
	I0719 05:22:47.798371     796 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f.tar -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.44-1721324606-19298@sha256_1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f.tar
	I0719 05:22:47.798371     796 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory
	I0719 05:22:47.798371     796 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory, skipping pull
	I0719 05:22:47.798371     796 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f exists in cache, skipping pull
	I0719 05:22:47.798371     796 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f as a tarball
	I0719 05:22:47.798371     796 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f from local cache
	I0719 05:22:47.798371     796 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f.tar -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.44-1721324606-19298@sha256_1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f.tar
	I0719 05:22:48.326459     796 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f from cached tarball
	I0719 05:22:48.326459     796 cache.go:194] Successfully downloaded all kic artifacts
	I0719 05:22:48.326459     796 start.go:360] acquireMachinesLock for newest-cni-800400: {Name:mkdd3b144b2005e1885add254cd0c3cf58c61802 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 05:22:48.327023     796 start.go:364] duration metric: took 564µs to acquireMachinesLock for "newest-cni-800400"
	I0719 05:22:48.327347     796 start.go:96] Skipping create...Using existing machine configuration
	I0719 05:22:48.327347     796 fix.go:54] fixHost starting: 
	I0719 05:22:48.346088     796 cli_runner.go:164] Run: docker container inspect newest-cni-800400 --format={{.State.Status}}
	I0719 05:22:48.529493     796 fix.go:112] recreateIfNeeded on newest-cni-800400: state=Stopped err=<nil>
	W0719 05:22:48.529493     796 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 05:22:48.532530     796 out.go:177] * Restarting existing docker container for "newest-cni-800400" ...
	I0719 05:22:47.383374    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:22:49.417690    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:22:48.544487     796 cli_runner.go:164] Run: docker start newest-cni-800400
	I0719 05:22:49.247690     796 cli_runner.go:164] Run: docker container inspect newest-cni-800400 --format={{.State.Status}}
	I0719 05:22:49.441609     796 kic.go:430] container "newest-cni-800400" state is running.
	I0719 05:22:49.454591     796 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-800400
	I0719 05:22:49.651158     796 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\newest-cni-800400\config.json ...
	I0719 05:22:49.653162     796 machine.go:94] provisionDockerMachine start ...
	I0719 05:22:49.664169     796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800400
	I0719 05:22:49.875173     796 main.go:141] libmachine: Using SSH client type: native
	I0719 05:22:49.875173     796 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x142aa40] 0x142d620 <nil>  [] 0s} 127.0.0.1 52808 <nil> <nil>}
	I0719 05:22:49.875173     796 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 05:22:49.878204     796 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0719 05:22:51.858916    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:22:53.869385    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:22:53.077585     796 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-800400
	
	I0719 05:22:53.077585     796 ubuntu.go:169] provisioning hostname "newest-cni-800400"
	I0719 05:22:53.089267     796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800400
	I0719 05:22:53.292364     796 main.go:141] libmachine: Using SSH client type: native
	I0719 05:22:53.292435     796 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x142aa40] 0x142d620 <nil>  [] 0s} 127.0.0.1 52808 <nil> <nil>}
	I0719 05:22:53.292435     796 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-800400 && echo "newest-cni-800400" | sudo tee /etc/hostname
	I0719 05:22:53.495116     796 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-800400
	
	I0719 05:22:53.505118     796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800400
	I0719 05:22:53.696854     796 main.go:141] libmachine: Using SSH client type: native
	I0719 05:22:53.697995     796 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x142aa40] 0x142d620 <nil>  [] 0s} 127.0.0.1 52808 <nil> <nil>}
	I0719 05:22:53.697995     796 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-800400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-800400/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-800400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 05:22:53.869385     796 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 05:22:53.869385     796 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0719 05:22:53.869385     796 ubuntu.go:177] setting up certificates
	I0719 05:22:53.869385     796 provision.go:84] configureAuth start
	I0719 05:22:53.890487     796 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-800400
	I0719 05:22:54.079465     796 provision.go:143] copyHostCerts
	I0719 05:22:54.079465     796 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0719 05:22:54.079465     796 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0719 05:22:54.080383     796 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0719 05:22:54.081196     796 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0719 05:22:54.081196     796 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0719 05:22:54.082004     796 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0719 05:22:54.083226     796 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0719 05:22:54.083226     796 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0719 05:22:54.083694     796 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1675 bytes)
	I0719 05:22:54.084777     796 provision.go:117] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.newest-cni-800400 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-800400]
	I0719 05:22:54.327793     796 provision.go:177] copyRemoteCerts
	I0719 05:22:54.340778     796 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 05:22:54.351785     796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800400
	I0719 05:22:54.565308     796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52808 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\newest-cni-800400\id_rsa Username:docker}
	I0719 05:22:54.705468     796 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0719 05:22:54.751528     796 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0719 05:22:54.800511     796 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 05:22:54.847733     796 provision.go:87] duration metric: took 978.3408ms to configureAuth
	I0719 05:22:54.847733     796 ubuntu.go:193] setting minikube options for container-runtime
	I0719 05:22:54.848799     796 config.go:182] Loaded profile config "newest-cni-800400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0719 05:22:54.861573     796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800400
	I0719 05:22:55.038255     796 main.go:141] libmachine: Using SSH client type: native
	I0719 05:22:55.040649     796 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x142aa40] 0x142d620 <nil>  [] 0s} 127.0.0.1 52808 <nil> <nil>}
	I0719 05:22:55.040785     796 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0719 05:22:55.229482     796 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0719 05:22:55.229482     796 ubuntu.go:71] root file system type: overlay
	I0719 05:22:55.229482     796 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0719 05:22:55.239845     796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800400
	I0719 05:22:55.427890     796 main.go:141] libmachine: Using SSH client type: native
	I0719 05:22:55.427890     796 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x142aa40] 0x142d620 <nil>  [] 0s} 127.0.0.1 52808 <nil> <nil>}
	I0719 05:22:55.427890     796 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0719 05:22:55.633348     796 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0719 05:22:55.644337     796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800400
	I0719 05:22:55.836182     796 main.go:141] libmachine: Using SSH client type: native
	I0719 05:22:55.836862     796 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x142aa40] 0x142d620 <nil>  [] 0s} 127.0.0.1 52808 <nil> <nil>}
	I0719 05:22:55.836862     796 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0719 05:22:56.018886     796 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 05:22:56.019598     796 machine.go:97] duration metric: took 6.3663859s to provisionDockerMachine
	I0719 05:22:56.019598     796 start.go:293] postStartSetup for "newest-cni-800400" (driver="docker")
	I0719 05:22:56.019713     796 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 05:22:56.041551     796 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 05:22:56.051115     796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800400
	I0719 05:22:56.220606     796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52808 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\newest-cni-800400\id_rsa Username:docker}
	I0719 05:22:56.360356     796 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 05:22:56.367170    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:22:59.336598    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:22:56.369170     796 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0719 05:22:56.370172     796 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0719 05:22:56.370172     796 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0719 05:22:56.370172     796 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0719 05:22:56.370172     796 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0719 05:22:56.370172     796 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0719 05:22:56.371233     796 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\109722.pem -> 109722.pem in /etc/ssl/certs
	I0719 05:22:56.385163     796 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 05:22:56.402199     796 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\109722.pem --> /etc/ssl/certs/109722.pem (1708 bytes)
	I0719 05:22:56.452263     796 start.go:296] duration metric: took 432.6619ms for postStartSetup
	I0719 05:22:56.465338     796 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 05:22:56.475463     796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800400
	I0719 05:22:56.658385     796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52808 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\newest-cni-800400\id_rsa Username:docker}
	I0719 05:22:56.795600     796 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0719 05:22:56.809470     796 fix.go:56] duration metric: took 8.482057s for fixHost
	I0719 05:22:56.809470     796 start.go:83] releasing machines lock for "newest-cni-800400", held for 8.4822865s
	I0719 05:22:56.819909     796 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-800400
	I0719 05:22:56.991821     796 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0719 05:22:57.002820     796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800400
	I0719 05:22:57.004860     796 ssh_runner.go:195] Run: cat /version.json
	I0719 05:22:57.013820     796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800400
	I0719 05:22:57.179348     796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52808 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\newest-cni-800400\id_rsa Username:docker}
	I0719 05:22:57.194325     796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52808 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\newest-cni-800400\id_rsa Username:docker}
	I0719 05:22:57.296277     796 ssh_runner.go:195] Run: systemctl --version
	W0719 05:22:57.302294     796 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0719 05:22:57.320373     796 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0719 05:22:57.347731     796 ssh_runner.go:195] Run: sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	W0719 05:22:57.377748     796 start.go:439] unable to name loopback interface in configureRuntimes: unable to patch loopback cni config "/etc/cni/net.d/*loopback.conf*": sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;: Process exited with status 1
	stdout:
	
	stderr:
	find: '\\etc\\cni\\net.d': No such file or directory
	I0719 05:22:57.391436     796 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 05:22:57.413840     796 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0719 05:22:57.413840     796 start.go:495] detecting cgroup driver to use...
	I0719 05:22:57.413840     796 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0719 05:22:57.413840     796 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0719 05:22:57.417826     796 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W0719 05:22:57.417826     796 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0719 05:22:57.463838     796 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0719 05:22:57.497793     796 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0719 05:22:57.519568     796 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0719 05:22:57.531577     796 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0719 05:22:57.572595     796 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 05:22:57.617147     796 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0719 05:22:57.653260     796 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 05:22:57.687405     796 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 05:22:57.727603     796 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0719 05:22:57.772661     796 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0719 05:22:57.811482     796 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0719 05:22:57.844256     796 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 05:22:57.883694     796 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 05:22:57.918426     796 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 05:22:58.083879     796 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0719 05:22:58.274341     796 start.go:495] detecting cgroup driver to use...
	I0719 05:22:58.274485     796 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0719 05:22:58.289216     796 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0719 05:22:58.317391     796 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0719 05:22:58.338555     796 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 05:22:58.367546     796 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 05:22:58.417561     796 ssh_runner.go:195] Run: which cri-dockerd
	I0719 05:22:58.442581     796 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0719 05:22:58.462549     796 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0719 05:22:58.559735     796 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0719 05:22:58.733423     796 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0719 05:22:58.905741     796 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0719 05:22:58.905984     796 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0719 05:22:58.960808     796 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 05:22:59.131654     796 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 05:23:01.363120    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:23:03.370433    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:23:05.858873    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:23:09.293350     796 ssh_runner.go:235] Completed: sudo systemctl restart docker: (10.1616163s)
	I0719 05:23:09.311387     796 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0719 05:23:09.357374     796 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0719 05:23:09.407377     796 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0719 05:23:09.450636     796 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0719 05:23:09.638189     796 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0719 05:23:09.825699     796 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 05:23:10.011611     796 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0719 05:23:10.060550     796 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0719 05:23:10.105003     796 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 05:23:10.298868     796 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0719 05:23:10.498813     796 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0719 05:23:10.518207     796 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0719 05:23:10.529820     796 start.go:563] Will wait 60s for crictl version
	I0719 05:23:10.543846     796 ssh_runner.go:195] Run: which crictl
	I0719 05:23:10.567848     796 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 05:23:10.668603     796 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0719 05:23:10.678573     796 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 05:23:10.758864     796 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 05:23:07.875450    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:23:10.372154    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:23:10.829677     796 out.go:204] * Preparing Kubernetes v1.31.0-beta.0 on Docker 27.0.3 ...
	I0719 05:23:10.840681     796 cli_runner.go:164] Run: docker exec -t newest-cni-800400 dig +short host.docker.internal
	I0719 05:23:11.151705     796 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0719 05:23:11.164718     796 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0719 05:23:11.176737     796 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 05:23:11.223184     796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-800400
	I0719 05:23:11.420773     796 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0719 05:23:11.423822     796 kubeadm.go:883] updating cluster {Name:newest-cni-800400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-800400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 05:23:11.424156     796 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0719 05:23:11.437837     796 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0719 05:23:11.488686     796 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	registry.k8s.io/kube-proxy:v1.31.0-beta.0
	registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	registry.k8s.io/etcd:3.5.14-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0719 05:23:11.488855     796 docker.go:615] Images already preloaded, skipping extraction
	I0719 05:23:11.500055     796 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0719 05:23:11.550295     796 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	registry.k8s.io/kube-proxy:v1.31.0-beta.0
	registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	registry.k8s.io/etcd:3.5.14-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0719 05:23:11.551312     796 cache_images.go:84] Images are preloaded, skipping loading
	I0719 05:23:11.551312     796 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.31.0-beta.0 docker true true} ...
	I0719 05:23:11.551312     796 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-800400 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-800400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 05:23:11.561301     796 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0719 05:23:11.687324     796 cni.go:84] Creating CNI manager for ""
	I0719 05:23:11.687324     796 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 05:23:11.687324     796 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0719 05:23:11.687324     796 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-800400 NodeName:newest-cni-800400 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 05:23:11.687324     796 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-800400"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 05:23:11.700336     796 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0719 05:23:11.722323     796 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 05:23:11.737307     796 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 05:23:11.760450     796 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (360 bytes)
	I0719 05:23:11.795811     796 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0719 05:23:11.836813     796 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
	I0719 05:23:11.893869     796 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0719 05:23:11.902811     796 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 05:23:11.944445     796 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 05:23:12.117527     796 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 05:23:12.156159     796 certs.go:68] Setting up C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\newest-cni-800400 for IP: 192.168.85.2
	I0719 05:23:12.156159     796 certs.go:194] generating shared ca certs ...
	I0719 05:23:12.156159     796 certs.go:226] acquiring lock for ca certs: {Name:mk09ff4ada22228900e1815c250154c7d8d76854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 05:23:12.157156     796 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key
	I0719 05:23:12.158184     796 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key
	I0719 05:23:12.158184     796 certs.go:256] generating profile certs ...
	I0719 05:23:12.159164     796 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\newest-cni-800400\client.key
	I0719 05:23:12.159164     796 certs.go:359] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\newest-cni-800400\apiserver.key.7c45a7c6
	I0719 05:23:12.160161     796 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\newest-cni-800400\proxy-client.key
	I0719 05:23:12.161164     796 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10972.pem (1338 bytes)
	W0719 05:23:12.162159     796 certs.go:480] ignoring C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10972_empty.pem, impossibly tiny 0 bytes
	I0719 05:23:12.162159     796 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0719 05:23:12.162159     796 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0719 05:23:12.163162     796 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0719 05:23:12.163162     796 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0719 05:23:12.164158     796 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\109722.pem (1708 bytes)
	I0719 05:23:12.166157     796 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 05:23:12.220839     796 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 05:23:12.270868     796 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 05:23:12.355275     796 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 05:23:12.434700     796 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\newest-cni-800400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0719 05:23:12.565431     796 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\newest-cni-800400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 05:23:12.665856     796 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\newest-cni-800400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 05:23:12.763688     796 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\newest-cni-800400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 05:23:12.855266     796 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10972.pem --> /usr/share/ca-certificates/10972.pem (1338 bytes)
	I0719 05:23:12.957930     796 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\109722.pem --> /usr/share/ca-certificates/109722.pem (1708 bytes)
	I0719 05:23:13.009857     796 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 05:23:13.065809     796 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 05:23:13.124291     796 ssh_runner.go:195] Run: openssl version
	I0719 05:23:13.155283     796 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10972.pem && ln -fs /usr/share/ca-certificates/10972.pem /etc/ssl/certs/10972.pem"
	I0719 05:23:13.193285     796 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10972.pem
	I0719 05:23:13.208609     796 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 03:42 /usr/share/ca-certificates/10972.pem
	I0719 05:23:13.222012     796 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10972.pem
	I0719 05:23:13.251967     796 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10972.pem /etc/ssl/certs/51391683.0"
	I0719 05:23:13.288721     796 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109722.pem && ln -fs /usr/share/ca-certificates/109722.pem /etc/ssl/certs/109722.pem"
	I0719 05:23:13.327898     796 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109722.pem
	I0719 05:23:13.337900     796 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 03:42 /usr/share/ca-certificates/109722.pem
	I0719 05:23:13.351895     796 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109722.pem
	I0719 05:23:13.381900     796 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109722.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 05:23:13.422745     796 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 05:23:13.460650     796 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 05:23:13.472724     796 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 03:30 /usr/share/ca-certificates/minikubeCA.pem
	I0719 05:23:13.483559     796 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 05:23:13.508553     796 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 05:23:13.549157     796 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 05:23:13.572274     796 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 05:23:13.602377     796 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 05:23:13.632357     796 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 05:23:13.658345     796 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 05:23:13.686262     796 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 05:23:13.714906     796 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0719 05:23:13.731255     796 kubeadm.go:392] StartCluster: {Name:newest-cni-800400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-800400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 05:23:13.742738     796 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0719 05:23:13.807578     796 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 05:23:13.833025     796 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 05:23:13.833025     796 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 05:23:13.845899     796 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 05:23:13.867573     796 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 05:23:13.877914     796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-800400
	I0719 05:23:14.077932     796 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-800400" does not appear in C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0719 05:23:14.078632     796 kubeconfig.go:62] C:\Users\jenkins.minikube3\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-800400" cluster setting kubeconfig missing "newest-cni-800400" context setting]
	I0719 05:23:14.079464     796 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\kubeconfig: {Name:mk966a7640504e03827322930a51a762b5508893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 05:23:14.122399     796 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 05:23:14.146438     796 kubeadm.go:630] The running cluster does not require reconfiguration: 127.0.0.1
	I0719 05:23:14.146438     796 kubeadm.go:597] duration metric: took 313.4106ms to restartPrimaryControlPlane
	I0719 05:23:14.146438     796 kubeadm.go:394] duration metric: took 415.1796ms to StartCluster
	I0719 05:23:14.146438     796 settings.go:142] acquiring lock: {Name:mke99fb8c09012609ce6804e7dfd4d68f5541df7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 05:23:14.146438     796 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0719 05:23:14.148407     796 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\kubeconfig: {Name:mk966a7640504e03827322930a51a762b5508893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 05:23:14.149334     796 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 05:23:14.149334     796 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 05:23:14.149334     796 addons.go:69] Setting metrics-server=true in profile "newest-cni-800400"
	I0719 05:23:14.149334     796 addons.go:69] Setting default-storageclass=true in profile "newest-cni-800400"
	I0719 05:23:14.149334     796 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-800400"
	I0719 05:23:14.149334     796 addons.go:69] Setting dashboard=true in profile "newest-cni-800400"
	I0719 05:23:14.149873     796 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-800400"
	I0719 05:23:14.149873     796 addons.go:234] Setting addon metrics-server=true in "newest-cni-800400"
	I0719 05:23:14.149873     796 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-800400"
	W0719 05:23:14.150024     796 addons.go:243] addon metrics-server should already be in state true
	W0719 05:23:14.150024     796 addons.go:243] addon storage-provisioner should already be in state true
	I0719 05:23:14.150024     796 addons.go:234] Setting addon dashboard=true in "newest-cni-800400"
	W0719 05:23:14.150108     796 addons.go:243] addon dashboard should already be in state true
	I0719 05:23:14.150108     796 config.go:182] Loaded profile config "newest-cni-800400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0719 05:23:14.150108     796 host.go:66] Checking if "newest-cni-800400" exists ...
	I0719 05:23:14.150327     796 host.go:66] Checking if "newest-cni-800400" exists ...
	I0719 05:23:14.150455     796 host.go:66] Checking if "newest-cni-800400" exists ...
	I0719 05:23:14.158292     796 out.go:177] * Verifying Kubernetes components...
	I0719 05:23:14.178269     796 cli_runner.go:164] Run: docker container inspect newest-cni-800400 --format={{.State.Status}}
	I0719 05:23:14.182216     796 cli_runner.go:164] Run: docker container inspect newest-cni-800400 --format={{.State.Status}}
	I0719 05:23:14.183221     796 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 05:23:14.183221     796 cli_runner.go:164] Run: docker container inspect newest-cni-800400 --format={{.State.Status}}
	I0719 05:23:14.187226     796 cli_runner.go:164] Run: docker container inspect newest-cni-800400 --format={{.State.Status}}
	I0719 05:23:14.391119     796 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 05:23:14.394116     796 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0719 05:23:14.398110     796 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 05:23:14.398110     796 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 05:23:14.406120     796 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 05:23:14.407100     796 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0719 05:23:14.407100     796 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0719 05:23:14.411103     796 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0719 05:23:14.412284     796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800400
	I0719 05:23:14.418142     796 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0719 05:23:12.870275    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:23:15.938076    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:23:14.419148     796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800400
	I0719 05:23:14.422126     796 addons.go:234] Setting addon default-storageclass=true in "newest-cni-800400"
	W0719 05:23:14.422126     796 addons.go:243] addon default-storageclass should already be in state true
	I0719 05:23:14.422126     796 host.go:66] Checking if "newest-cni-800400" exists ...
	I0719 05:23:14.424141     796 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0719 05:23:14.424141     796 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0719 05:23:14.435144     796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800400
	I0719 05:23:14.451113     796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-800400
	I0719 05:23:14.453108     796 cli_runner.go:164] Run: docker container inspect newest-cni-800400 --format={{.State.Status}}
	I0719 05:23:14.612121     796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52808 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\newest-cni-800400\id_rsa Username:docker}
	I0719 05:23:14.626150     796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52808 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\newest-cni-800400\id_rsa Username:docker}
	I0719 05:23:14.627126     796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52808 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\newest-cni-800400\id_rsa Username:docker}
	I0719 05:23:14.645169     796 api_server.go:52] waiting for apiserver process to appear ...
	I0719 05:23:14.658111     796 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 05:23:14.658111     796 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 05:23:14.658111     796 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 05:23:14.669111     796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800400
	I0719 05:23:14.762129     796 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0719 05:23:14.762200     796 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0719 05:23:14.784588     796 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 05:23:14.797097     796 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0719 05:23:14.797097     796 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0719 05:23:14.823137     796 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0719 05:23:14.824108     796 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0719 05:23:14.851102     796 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0719 05:23:14.851102     796 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0719 05:23:14.862099     796 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0719 05:23:14.862099     796 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0719 05:23:14.863091     796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52808 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\newest-cni-800400\id_rsa Username:docker}
	I0719 05:23:14.932759     796 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 05:23:14.932759     796 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0719 05:23:14.948945     796 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0719 05:23:14.949491     796 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0719 05:23:14.961264     796 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0719 05:23:14.961264     796 retry.go:31] will retry after 208.659808ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0719 05:23:15.033553     796 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 05:23:15.044761     796 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0719 05:23:15.044761     796 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0719 05:23:15.116379     796 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0719 05:23:15.116379     796 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0719 05:23:15.163126     796 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0719 05:23:15.163126     796 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0719 05:23:15.169963     796 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 05:23:15.175863     796 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 05:23:15.192206     796 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 05:23:15.239555     796 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0719 05:23:15.239555     796 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	W0719 05:23:15.322291     796 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0719 05:23:15.322291     796 retry.go:31] will retry after 316.22023ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0719 05:23:15.344883     796 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0719 05:23:15.344883     796 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0719 05:23:15.457942     796 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0719 05:23:15.464557     796 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0719 05:23:15.464557     796 retry.go:31] will retry after 152.640997ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0719 05:23:15.466445     796 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0719 05:23:15.466445     796 retry.go:31] will retry after 336.123353ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0719 05:23:15.605587     796 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0719 05:23:15.605587     796 retry.go:31] will retry after 290.933503ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0719 05:23:15.638076     796 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0719 05:23:15.656406     796 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 05:23:15.670158     796 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0719 05:23:15.763937     796 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0719 05:23:15.763937     796 retry.go:31] will retry after 455.361002ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0719 05:23:15.770947     796 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0719 05:23:15.770947     796 retry.go:31] will retry after 319.608026ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0719 05:23:15.831413     796 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 05:23:15.923153     796 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0719 05:23:15.957500     796 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0719 05:23:15.957500     796 retry.go:31] will retry after 629.861001ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0719 05:23:16.042217     796 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0719 05:23:16.042217     796 retry.go:31] will retry after 423.075466ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0719 05:23:16.112630     796 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 05:23:16.160319     796 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 05:23:16.240386     796 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0719 05:23:16.242386     796 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0719 05:23:16.242386     796 retry.go:31] will retry after 360.068074ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0719 05:23:16.354513     796 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0719 05:23:16.354629     796 retry.go:31] will retry after 692.920926ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0719 05:23:18.370918    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:23:20.879375    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:23:16.487438     796 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0719 05:23:16.611991     796 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 05:23:16.639944     796 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 05:23:16.662249     796 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 05:23:17.065403     796 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0719 05:23:17.333671     796 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0719 05:23:17.333671     796 retry.go:31] will retry after 431.233658ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0719 05:23:17.528234     796 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0719 05:23:17.528330     796 retry.go:31] will retry after 841.135619ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0719 05:23:17.529080     796 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0719 05:23:17.529871     796 retry.go:31] will retry after 575.519925ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0719 05:23:17.551414     796 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0719 05:23:17.747681     796 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0719 05:23:17.747681     796 retry.go:31] will retry after 1.073328476s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0719 05:23:17.761676     796 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 05:23:17.779706     796 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0719 05:23:18.126927     796 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0719 05:23:18.126927     796 retry.go:31] will retry after 1.221943367s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0719 05:23:18.137369     796 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 05:23:18.164865     796 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 05:23:18.398143     796 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0719 05:23:18.725184     796 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0719 05:23:18.725286     796 retry.go:31] will retry after 1.330517556s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0719 05:23:18.742184     796 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 05:23:18.840956     796 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0719 05:23:19.216285     796 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0719 05:23:19.216285     796 retry.go:31] will retry after 1.658270148s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0719 05:23:19.240811     796 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 05:23:19.369448     796 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0719 05:23:19.531258     796 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0719 05:23:19.531258     796 retry.go:31] will retry after 1.328515571s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0719 05:23:19.531258     796 api_server.go:72] duration metric: took 5.3818824s to wait for apiserver process to appear ...
	I0719 05:23:19.531258     796 api_server.go:88] waiting for apiserver healthz status ...
	I0719 05:23:19.531258     796 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52812/healthz ...
	I0719 05:23:19.535796     796 api_server.go:269] stopped: https://127.0.0.1:52812/healthz: Get "https://127.0.0.1:52812/healthz": EOF
	I0719 05:23:20.035211     796 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52812/healthz ...
	I0719 05:23:20.078173     796 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 05:23:20.888416     796 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0719 05:23:20.896338     796 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 05:23:23.356373    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:23:25.372220    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:23:23.423798     796 api_server.go:279] https://127.0.0.1:52812/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 05:23:23.423872     796 api_server.go:103] status: https://127.0.0.1:52812/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 05:23:23.423872     796 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52812/healthz ...
	I0719 05:23:23.725628     796 api_server.go:279] https://127.0.0.1:52812/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 05:23:23.725765     796 api_server.go:103] status: https://127.0.0.1:52812/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 05:23:23.725765     796 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52812/healthz ...
	I0719 05:23:23.826852     796 api_server.go:279] https://127.0.0.1:52812/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 05:23:23.826852     796 api_server.go:103] status: https://127.0.0.1:52812/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 05:23:24.046458     796 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52812/healthz ...
	I0719 05:23:24.126554     796 api_server.go:279] https://127.0.0.1:52812/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 05:23:24.126626     796 api_server.go:103] status: https://127.0.0.1:52812/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 05:23:24.536338     796 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52812/healthz ...
	I0719 05:23:24.628821     796 api_server.go:279] https://127.0.0.1:52812/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 05:23:24.629885     796 api_server.go:103] status: https://127.0.0.1:52812/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 05:23:25.039795     796 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52812/healthz ...
	I0719 05:23:25.116559     796 api_server.go:279] https://127.0.0.1:52812/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 05:23:25.116692     796 api_server.go:103] status: https://127.0.0.1:52812/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 05:23:25.542929     796 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52812/healthz ...
	I0719 05:23:25.631342     796 api_server.go:279] https://127.0.0.1:52812/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 05:23:25.631342     796 api_server.go:103] status: https://127.0.0.1:52812/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 05:23:26.043810     796 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52812/healthz ...
	I0719 05:23:26.126461     796 api_server.go:279] https://127.0.0.1:52812/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 05:23:26.126603     796 api_server.go:103] status: https://127.0.0.1:52812/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 05:23:26.545905     796 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52812/healthz ...
	I0719 05:23:26.627200     796 api_server.go:279] https://127.0.0.1:52812/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 05:23:26.627310     796 api_server.go:103] status: https://127.0.0.1:52812/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 05:23:27.038048     796 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52812/healthz ...
	I0719 05:23:27.125215     796 api_server.go:279] https://127.0.0.1:52812/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 05:23:27.125215     796 api_server.go:103] status: https://127.0.0.1:52812/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 05:23:27.539471     796 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52812/healthz ...
	I0719 05:23:27.630284     796 api_server.go:279] https://127.0.0.1:52812/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 05:23:27.630284     796 api_server.go:103] status: https://127.0.0.1:52812/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 05:23:28.041308     796 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52812/healthz ...
	I0719 05:23:28.124981     796 api_server.go:279] https://127.0.0.1:52812/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 05:23:28.125111     796 api_server.go:103] status: https://127.0.0.1:52812/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 05:23:28.543243     796 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52812/healthz ...
	I0719 05:23:28.629355     796 api_server.go:279] https://127.0.0.1:52812/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 05:23:28.629355     796 api_server.go:103] status: https://127.0.0.1:52812/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 05:23:29.031783     796 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52812/healthz ...
	I0719 05:23:29.122967     796 api_server.go:279] https://127.0.0.1:52812/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 05:23:29.122967     796 api_server.go:103] status: https://127.0.0.1:52812/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 05:23:29.532962     796 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52812/healthz ...
	I0719 05:23:29.623358     796 api_server.go:279] https://127.0.0.1:52812/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 05:23:29.623479     796 api_server.go:103] status: https://127.0.0.1:52812/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 05:23:30.031769     796 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52812/healthz ...
	I0719 05:23:30.116091     796 api_server.go:279] https://127.0.0.1:52812/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 05:23:30.116281     796 api_server.go:103] status: https://127.0.0.1:52812/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 05:23:30.540886     796 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52812/healthz ...
	I0719 05:23:30.625426     796 api_server.go:279] https://127.0.0.1:52812/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 05:23:30.625426     796 api_server.go:103] status: https://127.0.0.1:52812/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 05:23:30.917234     796 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (11.5475822s)
	I0719 05:23:30.917971     796 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.8395478s)
	I0719 05:23:30.918199     796 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (10.0297046s)
	I0719 05:23:30.918199     796 addons.go:475] Verifying addon metrics-server=true in "newest-cni-800400"
	I0719 05:23:30.918404     796 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.0217827s)
	I0719 05:23:30.922771     796 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-800400 addons enable metrics-server
	
	I0719 05:23:30.954738     796 out.go:177] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I0719 05:23:27.866830    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:23:30.367459    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:23:30.960006     796 addons.go:510] duration metric: took 16.8105407s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
	I0719 05:23:31.044397     796 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52812/healthz ...
	I0719 05:23:31.058583     796 api_server.go:279] https://127.0.0.1:52812/healthz returned 200:
	ok
	I0719 05:23:31.075738     796 api_server.go:141] control plane version: v1.31.0-beta.0
	I0719 05:23:31.075738     796 api_server.go:131] duration metric: took 11.5443903s to wait for apiserver health ...
	I0719 05:23:31.075738     796 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 05:23:31.089775     796 system_pods.go:59] 8 kube-system pods found
	I0719 05:23:31.089775     796 system_pods.go:61] "coredns-5cfdc65f69-jxgrx" [cc8c2888-e1cb-4044-9ba5-f809490a5101] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 05:23:31.089775     796 system_pods.go:61] "etcd-newest-cni-800400" [2249b546-8423-41df-abf2-f1b18d6cbbea] Running
	I0719 05:23:31.089775     796 system_pods.go:61] "kube-apiserver-newest-cni-800400" [fb4bedcb-1bdd-4457-abdc-c4f3bc40004a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0719 05:23:31.089775     796 system_pods.go:61] "kube-controller-manager-newest-cni-800400" [a3bb9b03-92af-4cbc-a17a-2f15b3d2e327] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0719 05:23:31.089775     796 system_pods.go:61] "kube-proxy-lgtqk" [02ee96fb-be24-4eed-a5ea-cc61e76aa85c] Running
	I0719 05:23:31.089775     796 system_pods.go:61] "kube-scheduler-newest-cni-800400" [2140d9ee-ba89-499f-9cab-040a2e5712e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0719 05:23:31.089775     796 system_pods.go:61] "metrics-server-78fcd8795b-whkrw" [3beefc79-4fc8-4d03-8e4d-fbb225a84e42] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 05:23:31.089775     796 system_pods.go:61] "storage-provisioner" [395bdaa5-c709-48fa-b062-ff8a302a46b1] Running
	I0719 05:23:31.089775     796 system_pods.go:74] duration metric: took 14.0369ms to wait for pod list to return data ...
	I0719 05:23:31.089775     796 default_sa.go:34] waiting for default service account to be created ...
	I0719 05:23:31.097201     796 default_sa.go:45] found service account: "default"
	I0719 05:23:31.097296     796 default_sa.go:55] duration metric: took 7.4256ms for default service account to be created ...
	I0719 05:23:31.097296     796 kubeadm.go:582] duration metric: took 16.94783s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0719 05:23:31.097296     796 node_conditions.go:102] verifying NodePressure condition ...
	I0719 05:23:31.108751     796 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I0719 05:23:31.108751     796 node_conditions.go:123] node cpu capacity is 16
	I0719 05:23:31.108751     796 node_conditions.go:105] duration metric: took 11.4545ms to run NodePressure ...
	I0719 05:23:31.108751     796 start.go:241] waiting for startup goroutines ...
	I0719 05:23:31.108751     796 start.go:246] waiting for cluster config update ...
	I0719 05:23:31.108751     796 start.go:255] writing updated cluster config ...
	I0719 05:23:31.130692     796 ssh_runner.go:195] Run: rm -f paused
	I0719 05:23:31.291292     796 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0719 05:23:31.296844     796 out.go:177] * Done! kubectl is now configured to use "newest-cni-800400" cluster and "default" namespace by default
	I0719 05:23:32.870799    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:23:35.363291    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:23:37.376972    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:23:39.858778    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:23:41.866900    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:23:43.873136    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:23:46.366634    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:23:48.371499    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:23:50.862112    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:23:53.361071    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:23:55.367163    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:23:57.865858    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:23:59.866230    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:24:02.411161    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:24:04.857284    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:24:05.349929    8448 pod_ready.go:81] duration metric: took 4m0.0091187s for pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace to be "Ready" ...
	E0719 05:24:05.350042    8448 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0719 05:24:05.350042    8448 pod_ready.go:38] duration metric: took 5m27.5904265s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 05:24:05.350126    8448 api_server.go:52] waiting for apiserver process to appear ...
	I0719 05:24:05.361122    8448 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 05:24:05.433091    8448 logs.go:276] 2 containers: [891502bd603e 7d9c9067b30f]
	I0719 05:24:05.445989    8448 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 05:24:05.534248    8448 logs.go:276] 2 containers: [41eb1254f9cf 5bd4db013300]
	I0719 05:24:05.545202    8448 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 05:24:05.612606    8448 logs.go:276] 2 containers: [d46bbd6b65c5 59a10474c608]
	I0719 05:24:05.625073    8448 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 05:24:05.684340    8448 logs.go:276] 2 containers: [95dad6d99e18 105b39486f2b]
	I0719 05:24:05.696662    8448 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 05:24:05.747977    8448 logs.go:276] 2 containers: [756b5c94abf2 4aae10524ed9]
	I0719 05:24:05.758646    8448 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 05:24:05.810247    8448 logs.go:276] 2 containers: [e213f42e7fc3 43e043a0349d]
	I0719 05:24:05.819720    8448 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 05:24:05.868800    8448 logs.go:276] 0 containers: []
	W0719 05:24:05.869380    8448 logs.go:278] No container was found matching "kindnet"
	I0719 05:24:05.881404    8448 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 05:24:05.929550    8448 logs.go:276] 2 containers: [ccd5bac65a54 cf6d6836594d]
	I0719 05:24:05.940645    8448 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0719 05:24:05.990663    8448 logs.go:276] 1 containers: [91ec1ba3377b]
	I0719 05:24:05.990663    8448 logs.go:123] Gathering logs for kube-proxy [756b5c94abf2] ...
	I0719 05:24:05.990663    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756b5c94abf2"
	I0719 05:24:06.043052    8448 logs.go:123] Gathering logs for kube-controller-manager [e213f42e7fc3] ...
	I0719 05:24:06.043052    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e213f42e7fc3"
	I0719 05:24:06.122579    8448 logs.go:123] Gathering logs for kube-controller-manager [43e043a0349d] ...
	I0719 05:24:06.122579    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43e043a0349d"
	I0719 05:24:06.232883    8448 logs.go:123] Gathering logs for storage-provisioner [ccd5bac65a54] ...
	I0719 05:24:06.232883    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccd5bac65a54"
	I0719 05:24:06.284477    8448 logs.go:123] Gathering logs for storage-provisioner [cf6d6836594d] ...
	I0719 05:24:06.284477    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf6d6836594d"
	I0719 05:24:06.333041    8448 logs.go:123] Gathering logs for kubelet ...
	I0719 05:24:06.333041    8448 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0719 05:24:06.406469    8448 logs.go:138] Found kubelet problem: Jul 19 05:18:37 old-k8s-version-546500 kubelet[1892]: E0719 05:18:37.459293    1892 reflector.go:138] object-"kube-system"/"kube-proxy-token-2pc7z": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-2pc7z" is forbidden: User "system:node:old-k8s-version-546500" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-546500' and this object
	W0719 05:24:06.406469    8448 logs.go:138] Found kubelet problem: Jul 19 05:18:37 old-k8s-version-546500 kubelet[1892]: E0719 05:18:37.460370    1892 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-546500" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-546500' and this object
	W0719 05:24:06.415518    8448 logs.go:138] Found kubelet problem: Jul 19 05:18:43 old-k8s-version-546500 kubelet[1892]: E0719 05:18:43.863000    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0719 05:24:06.417808    8448 logs.go:138] Found kubelet problem: Jul 19 05:18:44 old-k8s-version-546500 kubelet[1892]: E0719 05:18:44.255969    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.419510    8448 logs.go:138] Found kubelet problem: Jul 19 05:18:45 old-k8s-version-546500 kubelet[1892]: E0719 05:18:45.411157    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.421457    8448 logs.go:138] Found kubelet problem: Jul 19 05:19:00 old-k8s-version-546500 kubelet[1892]: E0719 05:19:00.224923    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0719 05:24:06.425541    8448 logs.go:138] Found kubelet problem: Jul 19 05:19:03 old-k8s-version-546500 kubelet[1892]: E0719 05:19:03.887468    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0719 05:24:06.426557    8448 logs.go:138] Found kubelet problem: Jul 19 05:19:04 old-k8s-version-546500 kubelet[1892]: E0719 05:19:04.419175    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.426557    8448 logs.go:138] Found kubelet problem: Jul 19 05:19:05 old-k8s-version-546500 kubelet[1892]: E0719 05:19:05.437479    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.427570    8448 logs.go:138] Found kubelet problem: Jul 19 05:19:06 old-k8s-version-546500 kubelet[1892]: E0719 05:19:06.485419    1892 pod_workers.go:191] Error syncing pod a5922a7c-6975-4659-8506-b800bd24f542 ("storage-provisioner_kube-system(a5922a7c-6975-4659-8506-b800bd24f542)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a5922a7c-6975-4659-8506-b800bd24f542)"
	W0719 05:24:06.428126    8448 logs.go:138] Found kubelet problem: Jul 19 05:19:14 old-k8s-version-546500 kubelet[1892]: E0719 05:19:14.155106    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.431467    8448 logs.go:138] Found kubelet problem: Jul 19 05:19:25 old-k8s-version-546500 kubelet[1892]: E0719 05:19:25.355701    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0719 05:24:06.433744    8448 logs.go:138] Found kubelet problem: Jul 19 05:19:25 old-k8s-version-546500 kubelet[1892]: E0719 05:19:25.408420    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0719 05:24:06.434077    8448 logs.go:138] Found kubelet problem: Jul 19 05:19:38 old-k8s-version-546500 kubelet[1892]: E0719 05:19:38.153300    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.434422    8448 logs.go:138] Found kubelet problem: Jul 19 05:19:40 old-k8s-version-546500 kubelet[1892]: E0719 05:19:40.152110    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.435698    8448 logs.go:138] Found kubelet problem: Jul 19 05:19:53 old-k8s-version-546500 kubelet[1892]: E0719 05:19:53.702175    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0719 05:24:06.436793    8448 logs.go:138] Found kubelet problem: Jul 19 05:19:54 old-k8s-version-546500 kubelet[1892]: E0719 05:19:54.149258    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.436995    8448 logs.go:138] Found kubelet problem: Jul 19 05:20:05 old-k8s-version-546500 kubelet[1892]: E0719 05:20:05.173619    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.439423    8448 logs.go:138] Found kubelet problem: Jul 19 05:20:06 old-k8s-version-546500 kubelet[1892]: E0719 05:20:06.210813    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0719 05:24:06.439755    8448 logs.go:138] Found kubelet problem: Jul 19 05:20:18 old-k8s-version-546500 kubelet[1892]: E0719 05:20:18.147603    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.439937    8448 logs.go:138] Found kubelet problem: Jul 19 05:20:20 old-k8s-version-546500 kubelet[1892]: E0719 05:20:20.148548    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.439937    8448 logs.go:138] Found kubelet problem: Jul 19 05:20:31 old-k8s-version-546500 kubelet[1892]: E0719 05:20:31.146645    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.440418    8448 logs.go:138] Found kubelet problem: Jul 19 05:20:33 old-k8s-version-546500 kubelet[1892]: E0719 05:20:33.148159    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.440635    8448 logs.go:138] Found kubelet problem: Jul 19 05:20:45 old-k8s-version-546500 kubelet[1892]: E0719 05:20:45.141725    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.441862    8448 logs.go:138] Found kubelet problem: Jul 19 05:20:45 old-k8s-version-546500 kubelet[1892]: E0719 05:20:45.698012    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0719 05:24:06.442893    8448 logs.go:138] Found kubelet problem: Jul 19 05:20:58 old-k8s-version-546500 kubelet[1892]: E0719 05:20:58.145596    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.442893    8448 logs.go:138] Found kubelet problem: Jul 19 05:20:58 old-k8s-version-546500 kubelet[1892]: E0719 05:20:58.148350    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.442893    8448 logs.go:138] Found kubelet problem: Jul 19 05:21:10 old-k8s-version-546500 kubelet[1892]: E0719 05:21:10.156224    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.442893    8448 logs.go:138] Found kubelet problem: Jul 19 05:21:11 old-k8s-version-546500 kubelet[1892]: E0719 05:21:11.138243    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.443682    8448 logs.go:138] Found kubelet problem: Jul 19 05:21:25 old-k8s-version-546500 kubelet[1892]: E0719 05:21:25.138270    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.443845    8448 logs.go:138] Found kubelet problem: Jul 19 05:21:25 old-k8s-version-546500 kubelet[1892]: E0719 05:21:25.138889    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.444022    8448 logs.go:138] Found kubelet problem: Jul 19 05:21:36 old-k8s-version-546500 kubelet[1892]: E0719 05:21:36.139658    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.445299    8448 logs.go:138] Found kubelet problem: Jul 19 05:21:38 old-k8s-version-546500 kubelet[1892]: E0719 05:21:38.490515    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0719 05:24:06.445299    8448 logs.go:138] Found kubelet problem: Jul 19 05:21:49 old-k8s-version-546500 kubelet[1892]: E0719 05:21:49.135258    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.446227    8448 logs.go:138] Found kubelet problem: Jul 19 05:21:51 old-k8s-version-546500 kubelet[1892]: E0719 05:21:51.133793    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.446529    8448 logs.go:138] Found kubelet problem: Jul 19 05:22:03 old-k8s-version-546500 kubelet[1892]: E0719 05:22:03.139366    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.446716    8448 logs.go:138] Found kubelet problem: Jul 19 05:22:03 old-k8s-version-546500 kubelet[1892]: E0719 05:22:03.140628    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.447963    8448 logs.go:138] Found kubelet problem: Jul 19 05:22:14 old-k8s-version-546500 kubelet[1892]: E0719 05:22:14.788198    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0719 05:24:06.448970    8448 logs.go:138] Found kubelet problem: Jul 19 05:22:15 old-k8s-version-546500 kubelet[1892]: E0719 05:22:15.132363    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.449273    8448 logs.go:138] Found kubelet problem: Jul 19 05:22:26 old-k8s-version-546500 kubelet[1892]: E0719 05:22:26.132249    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.449433    8448 logs.go:138] Found kubelet problem: Jul 19 05:22:28 old-k8s-version-546500 kubelet[1892]: E0719 05:22:28.133462    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.449829    8448 logs.go:138] Found kubelet problem: Jul 19 05:22:39 old-k8s-version-546500 kubelet[1892]: E0719 05:22:39.132621    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.450057    8448 logs.go:138] Found kubelet problem: Jul 19 05:22:42 old-k8s-version-546500 kubelet[1892]: E0719 05:22:42.128679    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.450322    8448 logs.go:138] Found kubelet problem: Jul 19 05:22:54 old-k8s-version-546500 kubelet[1892]: E0719 05:22:54.127469    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.450589    8448 logs.go:138] Found kubelet problem: Jul 19 05:22:57 old-k8s-version-546500 kubelet[1892]: E0719 05:22:57.129488    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.450807    8448 logs.go:138] Found kubelet problem: Jul 19 05:23:09 old-k8s-version-546500 kubelet[1892]: E0719 05:23:09.127590    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.451002    8448 logs.go:138] Found kubelet problem: Jul 19 05:23:10 old-k8s-version-546500 kubelet[1892]: E0719 05:23:10.125075    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.451207    8448 logs.go:138] Found kubelet problem: Jul 19 05:23:22 old-k8s-version-546500 kubelet[1892]: E0719 05:23:22.139954    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.451415    8448 logs.go:138] Found kubelet problem: Jul 19 05:23:24 old-k8s-version-546500 kubelet[1892]: E0719 05:23:24.125805    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.451415    8448 logs.go:138] Found kubelet problem: Jul 19 05:23:37 old-k8s-version-546500 kubelet[1892]: E0719 05:23:37.126302    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.452057    8448 logs.go:138] Found kubelet problem: Jul 19 05:23:39 old-k8s-version-546500 kubelet[1892]: E0719 05:23:39.125983    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.452208    8448 logs.go:138] Found kubelet problem: Jul 19 05:23:49 old-k8s-version-546500 kubelet[1892]: E0719 05:23:49.119898    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.452409    8448 logs.go:138] Found kubelet problem: Jul 19 05:23:53 old-k8s-version-546500 kubelet[1892]: E0719 05:23:53.122456    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:06.452663    8448 logs.go:138] Found kubelet problem: Jul 19 05:24:01 old-k8s-version-546500 kubelet[1892]: E0719 05:24:01.120229    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	I0719 05:24:06.452663    8448 logs.go:123] Gathering logs for describe nodes ...
	I0719 05:24:06.452663    8448 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 05:24:06.664312    8448 logs.go:123] Gathering logs for kube-apiserver [7d9c9067b30f] ...
	I0719 05:24:06.664345    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d9c9067b30f"
	I0719 05:24:06.748433    8448 logs.go:123] Gathering logs for kube-proxy [4aae10524ed9] ...
	I0719 05:24:06.749425    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aae10524ed9"
	I0719 05:24:06.801309    8448 logs.go:123] Gathering logs for Docker ...
	I0719 05:24:06.801479    8448 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 05:24:06.850090    8448 logs.go:123] Gathering logs for etcd [5bd4db013300] ...
	I0719 05:24:06.850090    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bd4db013300"
	I0719 05:24:06.915455    8448 logs.go:123] Gathering logs for coredns [59a10474c608] ...
	I0719 05:24:06.916038    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59a10474c608"
	I0719 05:24:06.970623    8448 logs.go:123] Gathering logs for kube-scheduler [95dad6d99e18] ...
	I0719 05:24:06.970623    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95dad6d99e18"
	I0719 05:24:07.019252    8448 logs.go:123] Gathering logs for kubernetes-dashboard [91ec1ba3377b] ...
	I0719 05:24:07.019252    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91ec1ba3377b"
	I0719 05:24:07.069016    8448 logs.go:123] Gathering logs for kube-scheduler [105b39486f2b] ...
	I0719 05:24:07.069016    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 105b39486f2b"
	I0719 05:24:07.120243    8448 logs.go:123] Gathering logs for container status ...
	I0719 05:24:07.120243    8448 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 05:24:07.225474    8448 logs.go:123] Gathering logs for dmesg ...
	I0719 05:24:07.225474    8448 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 05:24:07.255386    8448 logs.go:123] Gathering logs for kube-apiserver [891502bd603e] ...
	I0719 05:24:07.255386    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891502bd603e"
	I0719 05:24:07.336592    8448 logs.go:123] Gathering logs for etcd [41eb1254f9cf] ...
	I0719 05:24:07.336592    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41eb1254f9cf"
	I0719 05:24:07.411124    8448 logs.go:123] Gathering logs for coredns [d46bbd6b65c5] ...
	I0719 05:24:07.411124    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d46bbd6b65c5"
	I0719 05:24:07.463496    8448 out.go:304] Setting ErrFile to fd 1480...
	I0719 05:24:07.464099    8448 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0719 05:24:07.464278    8448 out.go:239] X Problems detected in kubelet:
	W0719 05:24:07.464278    8448 out.go:239]   Jul 19 05:23:37 old-k8s-version-546500 kubelet[1892]: E0719 05:23:37.126302    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:07.464278    8448 out.go:239]   Jul 19 05:23:39 old-k8s-version-546500 kubelet[1892]: E0719 05:23:39.125983    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:07.464278    8448 out.go:239]   Jul 19 05:23:49 old-k8s-version-546500 kubelet[1892]: E0719 05:23:49.119898    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:07.464278    8448 out.go:239]   Jul 19 05:23:53 old-k8s-version-546500 kubelet[1892]: E0719 05:23:53.122456    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:07.464278    8448 out.go:239]   Jul 19 05:24:01 old-k8s-version-546500 kubelet[1892]: E0719 05:24:01.120229    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	I0719 05:24:07.464278    8448 out.go:304] Setting ErrFile to fd 1480...
	I0719 05:24:07.464278    8448 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 05:24:17.495125    8448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 05:24:17.525961    8448 api_server.go:72] duration metric: took 5m52.8270487s to wait for apiserver process to appear ...
	I0719 05:24:17.525961    8448 api_server.go:88] waiting for apiserver healthz status ...
	I0719 05:24:17.537899    8448 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 05:24:17.594412    8448 logs.go:276] 2 containers: [891502bd603e 7d9c9067b30f]
	I0719 05:24:17.604414    8448 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 05:24:17.652411    8448 logs.go:276] 2 containers: [41eb1254f9cf 5bd4db013300]
	I0719 05:24:17.663413    8448 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 05:24:17.706432    8448 logs.go:276] 2 containers: [d46bbd6b65c5 59a10474c608]
	I0719 05:24:17.728864    8448 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 05:24:17.769467    8448 logs.go:276] 2 containers: [95dad6d99e18 105b39486f2b]
	I0719 05:24:17.779844    8448 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 05:24:17.821386    8448 logs.go:276] 2 containers: [756b5c94abf2 4aae10524ed9]
	I0719 05:24:17.835210    8448 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 05:24:17.885180    8448 logs.go:276] 2 containers: [e213f42e7fc3 43e043a0349d]
	I0719 05:24:17.896528    8448 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 05:24:17.942266    8448 logs.go:276] 0 containers: []
	W0719 05:24:17.942266    8448 logs.go:278] No container was found matching "kindnet"
	I0719 05:24:17.956027    8448 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0719 05:24:18.001875    8448 logs.go:276] 1 containers: [91ec1ba3377b]
	I0719 05:24:18.013850    8448 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 05:24:18.064061    8448 logs.go:276] 2 containers: [ccd5bac65a54 cf6d6836594d]
	I0719 05:24:18.064111    8448 logs.go:123] Gathering logs for describe nodes ...
	I0719 05:24:18.064111    8448 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 05:24:18.267829    8448 logs.go:123] Gathering logs for kube-apiserver [891502bd603e] ...
	I0719 05:24:18.267829    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891502bd603e"
	I0719 05:24:18.343105    8448 logs.go:123] Gathering logs for etcd [41eb1254f9cf] ...
	I0719 05:24:18.343105    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41eb1254f9cf"
	I0719 05:24:18.416510    8448 logs.go:123] Gathering logs for Docker ...
	I0719 05:24:18.416510    8448 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 05:24:18.463962    8448 logs.go:123] Gathering logs for kube-controller-manager [e213f42e7fc3] ...
	I0719 05:24:18.463962    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e213f42e7fc3"
	I0719 05:24:18.536418    8448 logs.go:123] Gathering logs for kubernetes-dashboard [91ec1ba3377b] ...
	I0719 05:24:18.536418    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91ec1ba3377b"
	I0719 05:24:18.590432    8448 logs.go:123] Gathering logs for storage-provisioner [ccd5bac65a54] ...
	I0719 05:24:18.590432    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccd5bac65a54"
	I0719 05:24:18.639296    8448 logs.go:123] Gathering logs for coredns [59a10474c608] ...
	I0719 05:24:18.639854    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59a10474c608"
	I0719 05:24:18.692817    8448 logs.go:123] Gathering logs for kube-scheduler [95dad6d99e18] ...
	I0719 05:24:18.692817    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95dad6d99e18"
	I0719 05:24:18.740961    8448 logs.go:123] Gathering logs for kube-proxy [756b5c94abf2] ...
	I0719 05:24:18.740961    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 756b5c94abf2"
	I0719 05:24:18.789978    8448 logs.go:123] Gathering logs for kube-controller-manager [43e043a0349d] ...
	I0719 05:24:18.790054    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43e043a0349d"
	I0719 05:24:18.858749    8448 logs.go:123] Gathering logs for kubelet ...
	I0719 05:24:18.858749    8448 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0719 05:24:18.931670    8448 logs.go:138] Found kubelet problem: Jul 19 05:18:37 old-k8s-version-546500 kubelet[1892]: E0719 05:18:37.459293    1892 reflector.go:138] object-"kube-system"/"kube-proxy-token-2pc7z": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-2pc7z" is forbidden: User "system:node:old-k8s-version-546500" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-546500' and this object
	W0719 05:24:18.932720    8448 logs.go:138] Found kubelet problem: Jul 19 05:18:37 old-k8s-version-546500 kubelet[1892]: E0719 05:18:37.460370    1892 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-546500" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-546500' and this object
	W0719 05:24:18.939036    8448 logs.go:138] Found kubelet problem: Jul 19 05:18:43 old-k8s-version-546500 kubelet[1892]: E0719 05:18:43.863000    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0719 05:24:18.940107    8448 logs.go:138] Found kubelet problem: Jul 19 05:18:44 old-k8s-version-546500 kubelet[1892]: E0719 05:18:44.255969    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.941088    8448 logs.go:138] Found kubelet problem: Jul 19 05:18:45 old-k8s-version-546500 kubelet[1892]: E0719 05:18:45.411157    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.943645    8448 logs.go:138] Found kubelet problem: Jul 19 05:19:00 old-k8s-version-546500 kubelet[1892]: E0719 05:19:00.224923    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0719 05:24:18.947155    8448 logs.go:138] Found kubelet problem: Jul 19 05:19:03 old-k8s-version-546500 kubelet[1892]: E0719 05:19:03.887468    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0719 05:24:18.948271    8448 logs.go:138] Found kubelet problem: Jul 19 05:19:04 old-k8s-version-546500 kubelet[1892]: E0719 05:19:04.419175    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.948271    8448 logs.go:138] Found kubelet problem: Jul 19 05:19:05 old-k8s-version-546500 kubelet[1892]: E0719 05:19:05.437479    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.949485    8448 logs.go:138] Found kubelet problem: Jul 19 05:19:06 old-k8s-version-546500 kubelet[1892]: E0719 05:19:06.485419    1892 pod_workers.go:191] Error syncing pod a5922a7c-6975-4659-8506-b800bd24f542 ("storage-provisioner_kube-system(a5922a7c-6975-4659-8506-b800bd24f542)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a5922a7c-6975-4659-8506-b800bd24f542)"
	W0719 05:24:18.949485    8448 logs.go:138] Found kubelet problem: Jul 19 05:19:14 old-k8s-version-546500 kubelet[1892]: E0719 05:19:14.155106    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.952085    8448 logs.go:138] Found kubelet problem: Jul 19 05:19:25 old-k8s-version-546500 kubelet[1892]: E0719 05:19:25.355701    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0719 05:24:18.955930    8448 logs.go:138] Found kubelet problem: Jul 19 05:19:25 old-k8s-version-546500 kubelet[1892]: E0719 05:19:25.408420    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0719 05:24:18.956534    8448 logs.go:138] Found kubelet problem: Jul 19 05:19:38 old-k8s-version-546500 kubelet[1892]: E0719 05:19:38.153300    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.957033    8448 logs.go:138] Found kubelet problem: Jul 19 05:19:40 old-k8s-version-546500 kubelet[1892]: E0719 05:19:40.152110    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.958330    8448 logs.go:138] Found kubelet problem: Jul 19 05:19:53 old-k8s-version-546500 kubelet[1892]: E0719 05:19:53.702175    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0719 05:24:18.959340    8448 logs.go:138] Found kubelet problem: Jul 19 05:19:54 old-k8s-version-546500 kubelet[1892]: E0719 05:19:54.149258    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.959340    8448 logs.go:138] Found kubelet problem: Jul 19 05:20:05 old-k8s-version-546500 kubelet[1892]: E0719 05:20:05.173619    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.961420    8448 logs.go:138] Found kubelet problem: Jul 19 05:20:06 old-k8s-version-546500 kubelet[1892]: E0719 05:20:06.210813    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0719 05:24:18.961420    8448 logs.go:138] Found kubelet problem: Jul 19 05:20:18 old-k8s-version-546500 kubelet[1892]: E0719 05:20:18.147603    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.962276    8448 logs.go:138] Found kubelet problem: Jul 19 05:20:20 old-k8s-version-546500 kubelet[1892]: E0719 05:20:20.148548    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.962416    8448 logs.go:138] Found kubelet problem: Jul 19 05:20:31 old-k8s-version-546500 kubelet[1892]: E0719 05:20:31.146645    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.962624    8448 logs.go:138] Found kubelet problem: Jul 19 05:20:33 old-k8s-version-546500 kubelet[1892]: E0719 05:20:33.148159    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.962826    8448 logs.go:138] Found kubelet problem: Jul 19 05:20:45 old-k8s-version-546500 kubelet[1892]: E0719 05:20:45.141725    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.965004    8448 logs.go:138] Found kubelet problem: Jul 19 05:20:45 old-k8s-version-546500 kubelet[1892]: E0719 05:20:45.698012    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0719 05:24:18.965235    8448 logs.go:138] Found kubelet problem: Jul 19 05:20:58 old-k8s-version-546500 kubelet[1892]: E0719 05:20:58.145596    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.965368    8448 logs.go:138] Found kubelet problem: Jul 19 05:20:58 old-k8s-version-546500 kubelet[1892]: E0719 05:20:58.148350    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.965550    8448 logs.go:138] Found kubelet problem: Jul 19 05:21:10 old-k8s-version-546500 kubelet[1892]: E0719 05:21:10.156224    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.965764    8448 logs.go:138] Found kubelet problem: Jul 19 05:21:11 old-k8s-version-546500 kubelet[1892]: E0719 05:21:11.138243    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.965968    8448 logs.go:138] Found kubelet problem: Jul 19 05:21:25 old-k8s-version-546500 kubelet[1892]: E0719 05:21:25.138270    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.966165    8448 logs.go:138] Found kubelet problem: Jul 19 05:21:25 old-k8s-version-546500 kubelet[1892]: E0719 05:21:25.138889    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.966349    8448 logs.go:138] Found kubelet problem: Jul 19 05:21:36 old-k8s-version-546500 kubelet[1892]: E0719 05:21:36.139658    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.967564    8448 logs.go:138] Found kubelet problem: Jul 19 05:21:38 old-k8s-version-546500 kubelet[1892]: E0719 05:21:38.490515    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0719 05:24:18.967564    8448 logs.go:138] Found kubelet problem: Jul 19 05:21:49 old-k8s-version-546500 kubelet[1892]: E0719 05:21:49.135258    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.968816    8448 logs.go:138] Found kubelet problem: Jul 19 05:21:51 old-k8s-version-546500 kubelet[1892]: E0719 05:21:51.133793    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.969142    8448 logs.go:138] Found kubelet problem: Jul 19 05:22:03 old-k8s-version-546500 kubelet[1892]: E0719 05:22:03.139366    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.969287    8448 logs.go:138] Found kubelet problem: Jul 19 05:22:03 old-k8s-version-546500 kubelet[1892]: E0719 05:22:03.140628    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.971731    8448 logs.go:138] Found kubelet problem: Jul 19 05:22:14 old-k8s-version-546500 kubelet[1892]: E0719 05:22:14.788198    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0719 05:24:18.971731    8448 logs.go:138] Found kubelet problem: Jul 19 05:22:15 old-k8s-version-546500 kubelet[1892]: E0719 05:22:15.132363    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.972144    8448 logs.go:138] Found kubelet problem: Jul 19 05:22:26 old-k8s-version-546500 kubelet[1892]: E0719 05:22:26.132249    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.972330    8448 logs.go:138] Found kubelet problem: Jul 19 05:22:28 old-k8s-version-546500 kubelet[1892]: E0719 05:22:28.133462    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.972527    8448 logs.go:138] Found kubelet problem: Jul 19 05:22:39 old-k8s-version-546500 kubelet[1892]: E0719 05:22:39.132621    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.972713    8448 logs.go:138] Found kubelet problem: Jul 19 05:22:42 old-k8s-version-546500 kubelet[1892]: E0719 05:22:42.128679    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.972915    8448 logs.go:138] Found kubelet problem: Jul 19 05:22:54 old-k8s-version-546500 kubelet[1892]: E0719 05:22:54.127469    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.973101    8448 logs.go:138] Found kubelet problem: Jul 19 05:22:57 old-k8s-version-546500 kubelet[1892]: E0719 05:22:57.129488    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.973297    8448 logs.go:138] Found kubelet problem: Jul 19 05:23:09 old-k8s-version-546500 kubelet[1892]: E0719 05:23:09.127590    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.973481    8448 logs.go:138] Found kubelet problem: Jul 19 05:23:10 old-k8s-version-546500 kubelet[1892]: E0719 05:23:10.125075    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.973677    8448 logs.go:138] Found kubelet problem: Jul 19 05:23:22 old-k8s-version-546500 kubelet[1892]: E0719 05:23:22.139954    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.973939    8448 logs.go:138] Found kubelet problem: Jul 19 05:23:24 old-k8s-version-546500 kubelet[1892]: E0719 05:23:24.125805    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.973939    8448 logs.go:138] Found kubelet problem: Jul 19 05:23:37 old-k8s-version-546500 kubelet[1892]: E0719 05:23:37.126302    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.974629    8448 logs.go:138] Found kubelet problem: Jul 19 05:23:39 old-k8s-version-546500 kubelet[1892]: E0719 05:23:39.125983    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.974770    8448 logs.go:138] Found kubelet problem: Jul 19 05:23:49 old-k8s-version-546500 kubelet[1892]: E0719 05:23:49.119898    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.975013    8448 logs.go:138] Found kubelet problem: Jul 19 05:23:53 old-k8s-version-546500 kubelet[1892]: E0719 05:23:53.122456    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.975220    8448 logs.go:138] Found kubelet problem: Jul 19 05:24:01 old-k8s-version-546500 kubelet[1892]: E0719 05:24:01.120229    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.975419    8448 logs.go:138] Found kubelet problem: Jul 19 05:24:07 old-k8s-version-546500 kubelet[1892]: E0719 05:24:07.124063    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:18.975606    8448 logs.go:138] Found kubelet problem: Jul 19 05:24:13 old-k8s-version-546500 kubelet[1892]: E0719 05:24:13.128990    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	I0719 05:24:18.975606    8448 logs.go:123] Gathering logs for dmesg ...
	I0719 05:24:18.975606    8448 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 05:24:19.004077    8448 logs.go:123] Gathering logs for kube-apiserver [7d9c9067b30f] ...
	I0719 05:24:19.004077    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d9c9067b30f"
	I0719 05:24:19.089889    8448 logs.go:123] Gathering logs for coredns [d46bbd6b65c5] ...
	I0719 05:24:19.089889    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d46bbd6b65c5"
	I0719 05:24:19.145795    8448 logs.go:123] Gathering logs for storage-provisioner [cf6d6836594d] ...
	I0719 05:24:19.145836    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf6d6836594d"
	I0719 05:24:19.212320    8448 logs.go:123] Gathering logs for etcd [5bd4db013300] ...
	I0719 05:24:19.212522    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bd4db013300"
	I0719 05:24:19.275553    8448 logs.go:123] Gathering logs for kube-scheduler [105b39486f2b] ...
	I0719 05:24:19.276163    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 105b39486f2b"
	I0719 05:24:19.336035    8448 logs.go:123] Gathering logs for kube-proxy [4aae10524ed9] ...
	I0719 05:24:19.336035    8448 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aae10524ed9"
	I0719 05:24:19.383665    8448 logs.go:123] Gathering logs for container status ...
	I0719 05:24:19.383665    8448 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 05:24:19.483349    8448 out.go:304] Setting ErrFile to fd 1480...
	I0719 05:24:19.483917    8448 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0719 05:24:19.484071    8448 out.go:239] X Problems detected in kubelet:
	W0719 05:24:19.484071    8448 out.go:239]   Jul 19 05:23:49 old-k8s-version-546500 kubelet[1892]: E0719 05:23:49.119898    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:19.484071    8448 out.go:239]   Jul 19 05:23:53 old-k8s-version-546500 kubelet[1892]: E0719 05:23:53.122456    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:19.484157    8448 out.go:239]   Jul 19 05:24:01 old-k8s-version-546500 kubelet[1892]: E0719 05:24:01.120229    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0719 05:24:19.484193    8448 out.go:239]   Jul 19 05:24:07 old-k8s-version-546500 kubelet[1892]: E0719 05:24:07.124063    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0719 05:24:19.484193    8448 out.go:239]   Jul 19 05:24:13 old-k8s-version-546500 kubelet[1892]: E0719 05:24:13.128990    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	I0719 05:24:19.484193    8448 out.go:304] Setting ErrFile to fd 1480...
	I0719 05:24:19.484291    8448 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 05:24:29.508932    8448 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52543/healthz ...
	I0719 05:24:29.527121    8448 api_server.go:279] https://127.0.0.1:52543/healthz returned 200:
	ok
	I0719 05:24:29.531027    8448 out.go:177] 
	W0719 05:24:29.533743    8448 out.go:239] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0719 05:24:29.533849    8448 out.go:239] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0719 05:24:29.533849    8448 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0719 05:24:29.533849    8448 out.go:239] * 
	W0719 05:24:29.535224    8448 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 05:24:29.537558    8448 out.go:177] 
	
	
	==> Docker <==
	Jul 19 05:19:25 old-k8s-version-546500 dockerd[1457]: time="2024-07-19T05:19:25.343024554Z" level=info msg="Attempting next endpoint for pull after error: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Jul 19 05:19:25 old-k8s-version-546500 dockerd[1457]: time="2024-07-19T05:19:25.395551302Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	Jul 19 05:19:25 old-k8s-version-546500 dockerd[1457]: time="2024-07-19T05:19:25.395710322Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	Jul 19 05:19:25 old-k8s-version-546500 dockerd[1457]: time="2024-07-19T05:19:25.406638326Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	Jul 19 05:19:53 old-k8s-version-546500 dockerd[1457]: time="2024-07-19T05:19:53.440668785Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Jul 19 05:19:53 old-k8s-version-546500 dockerd[1457]: time="2024-07-19T05:19:53.689192902Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Jul 19 05:19:53 old-k8s-version-546500 dockerd[1457]: time="2024-07-19T05:19:53.689678163Z" level=warning msg="[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Jul 19 05:19:53 old-k8s-version-546500 dockerd[1457]: time="2024-07-19T05:19:53.689827382Z" level=info msg="Attempting next endpoint for pull after error: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Jul 19 05:20:06 old-k8s-version-546500 dockerd[1457]: time="2024-07-19T05:20:06.196739849Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	Jul 19 05:20:06 old-k8s-version-546500 dockerd[1457]: time="2024-07-19T05:20:06.196890068Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	Jul 19 05:20:06 old-k8s-version-546500 dockerd[1457]: time="2024-07-19T05:20:06.208966984Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	Jul 19 05:20:45 old-k8s-version-546500 dockerd[1457]: time="2024-07-19T05:20:45.445858396Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Jul 19 05:20:45 old-k8s-version-546500 dockerd[1457]: time="2024-07-19T05:20:45.685789006Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Jul 19 05:20:45 old-k8s-version-546500 dockerd[1457]: time="2024-07-19T05:20:45.686044739Z" level=warning msg="[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Jul 19 05:20:45 old-k8s-version-546500 dockerd[1457]: time="2024-07-19T05:20:45.686112448Z" level=info msg="Attempting next endpoint for pull after error: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Jul 19 05:21:37 old-k8s-version-546500 dockerd[1457]: time="2024-07-19T05:21:37.771853216Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	Jul 19 05:21:37 old-k8s-version-546500 dockerd[1457]: time="2024-07-19T05:21:37.772013136Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	Jul 19 05:21:38 old-k8s-version-546500 dockerd[1457]: time="2024-07-19T05:21:38.488796627Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	Jul 19 05:22:14 old-k8s-version-546500 dockerd[1457]: time="2024-07-19T05:22:14.529344006Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Jul 19 05:22:14 old-k8s-version-546500 dockerd[1457]: time="2024-07-19T05:22:14.772483225Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Jul 19 05:22:14 old-k8s-version-546500 dockerd[1457]: time="2024-07-19T05:22:14.772748058Z" level=warning msg="[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Jul 19 05:22:14 old-k8s-version-546500 dockerd[1457]: time="2024-07-19T05:22:14.772790063Z" level=info msg="Attempting next endpoint for pull after error: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Jul 19 05:24:20 old-k8s-version-546500 dockerd[1457]: time="2024-07-19T05:24:20.183146581Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	Jul 19 05:24:20 old-k8s-version-546500 dockerd[1457]: time="2024-07-19T05:24:20.183306908Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	Jul 19 05:24:20 old-k8s-version-546500 dockerd[1457]: time="2024-07-19T05:24:20.195250971Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	91ec1ba3377bd       kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93        5 minutes ago       Running             kubernetes-dashboard      0                   0272e307a73ff       kubernetes-dashboard-cd95d586-qbzb4
	ccd5bac65a54e       6e38f40d628db                                                                                         5 minutes ago       Running             storage-provisioner       2                   c87013b514641       storage-provisioner
	d46bbd6b65c58       bfe3a36ebd252                                                                                         5 minutes ago       Running             coredns                   1                   d551e09874a68       coredns-74ff55c5b-9mmxg
	25a5196f3c697       56cc512116c8f                                                                                         5 minutes ago       Running             busybox                   1                   8552796af855d       busybox
	cf6d6836594d5       6e38f40d628db                                                                                         5 minutes ago       Exited              storage-provisioner       1                   c87013b514641       storage-provisioner
	756b5c94abf25       10cc881966cfd                                                                                         5 minutes ago       Running             kube-proxy                1                   b74f15b043835       kube-proxy-sgsqg
	e213f42e7fc3f       b9fa1895dcaa6                                                                                         6 minutes ago       Running             kube-controller-manager   1                   db50c7078652a       kube-controller-manager-old-k8s-version-546500
	95dad6d99e183       3138b6e3d4712                                                                                         6 minutes ago       Running             kube-scheduler            1                   5912307c01cf1       kube-scheduler-old-k8s-version-546500
	41eb1254f9cf9       0369cf4303ffd                                                                                         6 minutes ago       Running             etcd                      1                   068f9620d9a3b       etcd-old-k8s-version-546500
	891502bd603e1       ca9843d3b5454                                                                                         6 minutes ago       Running             kube-apiserver            1                   556d469715d5e       kube-apiserver-old-k8s-version-546500
	ff3346628df36       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   7 minutes ago       Exited              busybox                   0                   f735eba272f7d       busybox
	59a10474c6088       bfe3a36ebd252                                                                                         9 minutes ago       Exited              coredns                   0                   140e7c6e3cbb8       coredns-74ff55c5b-9mmxg
	4aae10524ed97       10cc881966cfd                                                                                         9 minutes ago       Exited              kube-proxy                0                   62562f64dd119       kube-proxy-sgsqg
	43e043a0349da       b9fa1895dcaa6                                                                                         9 minutes ago       Exited              kube-controller-manager   0                   86be881756b94       kube-controller-manager-old-k8s-version-546500
	7d9c9067b30ff       ca9843d3b5454                                                                                         9 minutes ago       Exited              kube-apiserver            0                   4d199da66461f       kube-apiserver-old-k8s-version-546500
	5bd4db0133005       0369cf4303ffd                                                                                         9 minutes ago       Exited              etcd                      0                   5ad85477f2048       etcd-old-k8s-version-546500
	105b39486f2ba       3138b6e3d4712                                                                                         9 minutes ago       Exited              kube-scheduler            0                   1ae99635a0747       kube-scheduler-old-k8s-version-546500
	
	
	==> coredns [59a10474c608] <==
	I0719 05:15:33.950470       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-07-19 05:15:12.918450992 +0000 UTC m=+0.129502776) (total time: 21.03187933s):
	Trace[2019727887]: [21.03187933s] [21.03187933s] END
	E0719 05:15:33.950585       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	I0719 05:15:33.950586       1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-07-19 05:15:12.918426289 +0000 UTC m=+0.129477973) (total time: 21.032122062s):
	Trace[1427131847]: [21.032122062s] [21.032122062s] END
	E0719 05:15:33.950746       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	I0719 05:15:33.950586       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-07-19 05:15:12.91868662 +0000 UTC m=+0.129738404) (total time: 21.031877933s):
	Trace[939984059]: [21.031877933s] [21.031877933s] END
	E0719 05:15:33.950789       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.7.0
	linux/amd64, go1.14.4, f59c03d
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = 512bc0e06a520fa44f35dc15de10fdd6
	[INFO] Reloading complete
	[INFO] 127.0.0.1:52037 - 5095 "HINFO IN 608818452215256610.7561862652184248935. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.075054989s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d46bbd6b65c5] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 512bc0e06a520fa44f35dc15de10fdd6
	CoreDNS-1.7.0
	linux/amd64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:55178 - 560 "HINFO IN 8888809535510850875.3102686585685800756. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.032866585s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-546500
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-546500
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db
	                    minikube.k8s.io/name=old-k8s-version-546500
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_19T05_14_52_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 05:14:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-546500
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 05:24:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 05:19:59 +0000   Fri, 19 Jul 2024 05:14:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 05:19:59 +0000   Fri, 19 Jul 2024 05:14:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 05:19:59 +0000   Fri, 19 Jul 2024 05:14:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 05:19:59 +0000   Fri, 19 Jul 2024 05:15:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-546500
	Capacity:
	  cpu:                16
	  ephemeral-storage:  1055762868Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32868764Ki
	  pods:               110
	Allocatable:
	  cpu:                16
	  ephemeral-storage:  1055762868Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32868764Ki
	  pods:               110
	System Info:
	  Machine ID:                 5cb7ad82de5b4d739b0e2108dacd498f
	  System UUID:                5cb7ad82de5b4d739b0e2108dacd498f
	  Boot ID:                    732c1326-1f28-4b90-a5e2-449115b83eea
	  Kernel Version:             5.15.146.1-microsoft-standard-WSL2
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m37s
	  kube-system                 coredns-74ff55c5b-9mmxg                           100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     9m25s
	  kube-system                 etcd-old-k8s-version-546500                       100m (0%)     0 (0%)      100Mi (0%)       0 (0%)         9m38s
	  kube-system                 kube-apiserver-old-k8s-version-546500             250m (1%)     0 (0%)      0 (0%)           0 (0%)         9m38s
	  kube-system                 kube-controller-manager-old-k8s-version-546500    200m (1%)     0 (0%)      0 (0%)           0 (0%)         9m38s
	  kube-system                 kube-proxy-sgsqg                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m26s
	  kube-system                 kube-scheduler-old-k8s-version-546500             100m (0%)     0 (0%)      0 (0%)           0 (0%)         9m38s
	  kube-system                 metrics-server-9975d5f86-mzhrf                    100m (0%)     0 (0%)      200Mi (0%)       0 (0%)         7m18s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m21s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-pbbzz         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m32s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-qbzb4               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (5%)   0 (0%)
	  memory             370Mi (1%)  170Mi (0%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  9m57s (x4 over 9m57s)  kubelet     Node old-k8s-version-546500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m57s (x5 over 9m57s)  kubelet     Node old-k8s-version-546500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m57s (x4 over 9m57s)  kubelet     Node old-k8s-version-546500 status is now: NodeHasSufficientPID
	  Normal  Starting                 9m40s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m40s                  kubelet     Node old-k8s-version-546500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m40s                  kubelet     Node old-k8s-version-546500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m40s                  kubelet     Node old-k8s-version-546500 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m39s                  kubelet     Node old-k8s-version-546500 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9m39s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m29s                  kubelet     Node old-k8s-version-546500 status is now: NodeReady
	  Normal  Starting                 9m21s                  kube-proxy  Starting kube-proxy.
	  Normal  Starting                 6m11s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientPID     6m10s (x7 over 6m10s)  kubelet     Node old-k8s-version-546500 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m10s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m9s (x8 over 6m10s)   kubelet     Node old-k8s-version-546500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m9s (x8 over 6m10s)   kubelet     Node old-k8s-version-546500 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 5m49s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[Jul19 04:58] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Jul19 05:01] tmpfs: Unknown parameter 'noswap'
	[  +6.617016] tmpfs: Unknown parameter 'noswap'
	[Jul19 05:05] tmpfs: Unknown parameter 'noswap'
	[ +13.843159] tmpfs: Unknown parameter 'noswap'
	[Jul19 05:14] tmpfs: Unknown parameter 'noswap'
	[ +12.988281] tmpfs: Unknown parameter 'noswap'
	[Jul19 05:16] tmpfs: Unknown parameter 'noswap'
	[Jul19 05:21] tmpfs: Unknown parameter 'noswap'
	[ +12.389167] tmpfs: Unknown parameter 'noswap'
	[Jul19 05:22] tmpfs: Unknown parameter 'noswap'
	[Jul19 05:23] tmpfs: Unknown parameter 'noswap'
	
	
	==> etcd [41eb1254f9cf] <==
	2024-07-19 05:22:51.681205 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-19 05:22:59.322413 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/metrics-server-9975d5f86-mzhrf\" " with result "range_response_count:1 size:4052" took too long (470.545617ms) to execute
	2024-07-19 05:22:59.775670 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (296.670303ms) to execute
	2024-07-19 05:22:59.776045 W | etcdserver: request "header:<ID:15638345950888933158 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/old-k8s-version-546500\" mod_revision:983 > success:<request_put:<key:\"/registry/leases/kube-node-lease/old-k8s-version-546500\" value_size:578 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/old-k8s-version-546500\" > >>" with result "size:16" took too long (293.744729ms) to execute
	2024-07-19 05:23:01.093919 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/metrics-server-9975d5f86-mzhrf\" " with result "range_response_count:1 size:4052" took too long (244.719765ms) to execute
	2024-07-19 05:23:01.680303 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-19 05:23:04.417534 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:420" took too long (157.757055ms) to execute
	2024-07-19 05:23:05.071937 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/metrics-server-9975d5f86-mzhrf\" " with result "range_response_count:1 size:4052" took too long (216.381245ms) to execute
	2024-07-19 05:23:06.971405 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/metrics-server-9975d5f86-mzhrf\" " with result "range_response_count:1 size:4052" took too long (125.105149ms) to execute
	2024-07-19 05:23:09.136956 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/metrics-server-9975d5f86-mzhrf\" " with result "range_response_count:1 size:4052" took too long (282.80377ms) to execute
	2024-07-19 05:23:11.676910 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-19 05:23:14.802409 W | etcdserver: request "header:<ID:15638345950888933275 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.76.2\" mod_revision:993 > success:<request_put:<key:\"/registry/masterleases/192.168.76.2\" value_size:67 lease:6414973914034157465 >> failure:<request_range:<key:\"/registry/masterleases/192.168.76.2\" > >>" with result "size:16" took too long (302.56524ms) to execute
	2024-07-19 05:23:14.803010 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/metrics-server-9975d5f86-mzhrf\" " with result "range_response_count:1 size:4052" took too long (449.928269ms) to execute
	2024-07-19 05:23:14.804344 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (400.933842ms) to execute
	2024-07-19 05:23:14.804669 W | etcdserver: read-only range request "key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" count_only:true " with result "range_response_count:0 size:5" took too long (353.845053ms) to execute
	2024-07-19 05:23:15.923248 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (521.756552ms) to execute
	2024-07-19 05:23:15.923638 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/metrics-server-9975d5f86-mzhrf\" " with result "range_response_count:1 size:4052" took too long (1.064253198s) to execute
	2024-07-19 05:23:21.676943 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-19 05:23:31.677861 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-19 05:23:41.674013 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-19 05:23:51.695678 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-19 05:24:01.673205 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-19 05:24:11.669933 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-19 05:24:21.672957 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-19 05:24:31.669329 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [5bd4db013300] <==
	2024-07-19 05:16:29.623591 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (142.478034ms) to execute
	2024-07-19 05:16:39.464373 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-19 05:16:48.475380 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.997588779s) to execute
	2024-07-19 05:16:48.476640 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:1120" took too long (228.825054ms) to execute
	2024-07-19 05:16:48.476675 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (1.172661872s) to execute
	2024-07-19 05:16:48.476705 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" count_only:true " with result "range_response_count:0 size:7" took too long (1.112739007s) to execute
	2024-07-19 05:16:48.476959 W | etcdserver: read-only range request "key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" count_only:true " with result "range_response_count:0 size:7" took too long (448.313ms) to execute
	2024-07-19 05:16:48.476998 W | etcdserver: read-only range request "key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" count_only:true " with result "range_response_count:0 size:5" took too long (1.271129733s) to execute
	2024-07-19 05:16:48.477402 W | etcdserver: read-only range request "key:\"/registry/persistentvolumes/\" range_end:\"/registry/persistentvolumes0\" count_only:true " with result "range_response_count:0 size:5" took too long (1.312900046s) to execute
	2024-07-19 05:16:49.323956 W | etcdserver: read-only range request "key:\"/registry/cronjobs/\" range_end:\"/registry/cronjobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (836.447164ms) to execute
	2024-07-19 05:16:49.324018 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:1 size:263" took too long (440.582461ms) to execute
	2024-07-19 05:16:49.324149 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (828.965593ms) to execute
	2024-07-19 05:16:49.324248 W | etcdserver: read-only range request "key:\"/registry/flowschemas/exempt\" " with result "range_response_count:1 size:879" took too long (407.832417ms) to execute
	2024-07-19 05:16:49.453068 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-19 05:16:49.707774 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (231.768627ms) to execute
	2024-07-19 05:16:50.890558 W | etcdserver: read-only range request "key:\"/registry/csidrivers/\" range_end:\"/registry/csidrivers0\" count_only:true " with result "range_response_count:0 size:5" took too long (124.832288ms) to execute
	2024-07-19 05:16:50.890835 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:257" took too long (217.563862ms) to execute
	2024-07-19 05:16:51.189594 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:420" took too long (199.847336ms) to execute
	2024-07-19 05:16:59.453937 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-19 05:17:09.452891 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-19 05:17:13.241149 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (370.130736ms) to execute
	2024-07-19 05:17:16.860709 N | pkg/osutil: received terminated signal, shutting down...
	WARNING: 2024/07/19 05:17:16 grpc: addrConn.createTransport failed to connect to {192.168.76.2:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 192.168.76.2:2379: connect: connection refused". Reconnecting...
	WARNING: 2024/07/19 05:17:16 grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	2024-07-19 05:17:16.869074 I | etcdserver: skipped leadership transfer for single voting member cluster
	
	
	==> kernel <==
	 05:24:34 up 2 days,  3:00,  0 users,  load average: 4.36, 6.31, 7.44
	Linux old-k8s-version-546500 5.15.146.1-microsoft-standard-WSL2 #1 SMP Thu Jan 11 04:09:03 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kube-apiserver [7d9c9067b30f] <==
	W0719 05:17:16.974758       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0719 05:17:16.974765       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0719 05:17:16.975325       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	I0719 05:17:16.976133       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0719 05:17:16.978505       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	W0719 05:17:16.979952       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0719 05:17:16.980548       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0719 05:17:16.980926       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	I0719 05:17:16.982055       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	W0719 05:17:17.056848       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0719 05:17:17.057315       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0719 05:17:17.057454       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0719 05:17:17.057725       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0719 05:17:17.057740       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0719 05:17:17.057766       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	I0719 05:17:17.058008       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	W0719 05:17:17.058305       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0719 05:17:17.058431       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	I0719 05:17:17.058508       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	W0719 05:17:17.058707       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	I0719 05:17:17.058729       1 secure_serving.go:241] Stopped listening on [::]:8443
	W0719 05:17:17.059150       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0719 05:17:17.059560       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0719 05:17:17.062082       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0719 05:17:17.065974       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	
	==> kube-apiserver [891502bd603e] <==
	I0719 05:21:49.471840       1 client.go:360] parsed scheme: "passthrough"
	I0719 05:21:49.472049       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0719 05:21:49.472067       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0719 05:22:22.181427       1 client.go:360] parsed scheme: "passthrough"
	I0719 05:22:22.181573       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0719 05:22:22.181593       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0719 05:22:55.166608       1 client.go:360] parsed scheme: "passthrough"
	I0719 05:22:55.166796       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0719 05:22:55.166817       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0719 05:23:14.804136       1 trace.go:205] Trace[1400714785]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (19-Jul-2024 05:23:14.239) (total time: 564ms):
	Trace[1400714785]: ---"Transaction committed" 560ms (05:23:00.804)
	Trace[1400714785]: [564.323175ms] [564.323175ms] END
	I0719 05:23:15.925921       1 trace.go:205] Trace[849586014]: "Get" url:/api/v1/namespaces/kube-system/pods/metrics-server-9975d5f86-mzhrf,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,client:192.168.76.1 (19-Jul-2024 05:23:14.857) (total time: 1068ms):
	Trace[849586014]: ---"About to write a response" 1067ms (05:23:00.924)
	Trace[849586014]: [1.068350011s] [1.068350011s] END
	I0719 05:23:28.818778       1 client.go:360] parsed scheme: "passthrough"
	I0719 05:23:28.819165       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0719 05:23:28.819185       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0719 05:23:38.426481       1 handler_proxy.go:102] no RequestInfo found in the context
	E0719 05:23:38.426796       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0719 05:23:38.426830       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0719 05:24:10.589534       1 client.go:360] parsed scheme: "passthrough"
	I0719 05:24:10.589679       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0719 05:24:10.589692       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [43e043a0349d] <==
	I0719 05:15:07.889240       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0719 05:15:07.888750       1 shared_informer.go:247] Caches are synced for expand 
	I0719 05:15:07.974830       1 shared_informer.go:247] Caches are synced for attach detach 
	I0719 05:15:07.974912       1 shared_informer.go:247] Caches are synced for PV protection 
	I0719 05:15:07.981022       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	I0719 05:15:07.981051       1 shared_informer.go:247] Caches are synced for resource quota 
	I0719 05:15:07.982483       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-sgsqg"
	I0719 05:15:08.074620       1 shared_informer.go:247] Caches are synced for resource quota 
	I0719 05:15:08.074641       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0719 05:15:08.077443       1 shared_informer.go:247] Caches are synced for disruption 
	I0719 05:15:08.077619       1 disruption.go:339] Sending events to api server.
	I0719 05:15:08.087797       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-5krgk"
	I0719 05:15:08.190573       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0719 05:15:08.280912       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-9mmxg"
	E0719 05:15:08.477309       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"46e7e87a-38fd-4b2b-860d-9d0d12647dc0", ResourceVersion:"248", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63856962892, loc:(*time.Location)(0x6f2f340)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0012301e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001230220)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.
LabelSelector)(0xc001230260), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Gl
usterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc000f6dcc0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001230
280), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeS
ource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0012302a0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil),
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0012302e0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001dfc600), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00122a1b8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0003d82a0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00132c2b8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc00122a208)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0719 05:15:08.574267       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0719 05:15:08.574307       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0719 05:15:08.592825       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0719 05:15:11.178496       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0719 05:15:11.276245       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-5krgk"
	I0719 05:17:14.384949       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	I0719 05:17:14.400132       1 event.go:291] "Event occurred" object="kube-system/metrics-server-9975d5f86" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-9975d5f86-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0719 05:17:14.409966       1 replica_set.go:532] sync "kube-system/metrics-server-9975d5f86" failed with pods "metrics-server-9975d5f86-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	E0719 05:17:14.559871       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	I0719 05:17:15.428298       1 event.go:291] "Event occurred" object="kube-system/metrics-server-9975d5f86" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-9975d5f86-mzhrf"
	
	
	==> kube-controller-manager [e213f42e7fc3] <==
	W0719 05:20:07.156357       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0719 05:20:33.161006       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0719 05:20:38.803518       1 request.go:655] Throttling request took 1.047464296s, request: GET:https://192.168.76.2:8443/apis/authorization.k8s.io/v1?timeout=32s
	W0719 05:20:39.656563       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0719 05:21:03.662797       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0719 05:21:11.304612       1 request.go:655] Throttling request took 1.046721326s, request: GET:https://192.168.76.2:8443/apis/authorization.k8s.io/v1beta1?timeout=32s
	W0719 05:21:12.156506       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0719 05:21:34.163698       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0719 05:21:43.804089       1 request.go:655] Throttling request took 1.046561118s, request: GET:https://192.168.76.2:8443/apis/admissionregistration.k8s.io/v1beta1?timeout=32s
	W0719 05:21:44.661891       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0719 05:22:04.665232       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0719 05:22:16.309930       1 request.go:655] Throttling request took 1.047121572s, request: GET:https://192.168.76.2:8443/apis/events.k8s.io/v1?timeout=32s
	W0719 05:22:17.162474       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0719 05:22:35.167463       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0719 05:22:48.810785       1 request.go:655] Throttling request took 1.044573154s, request: GET:https://192.168.76.2:8443/apis/certificates.k8s.io/v1?timeout=32s
	W0719 05:22:49.662986       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0719 05:23:05.667868       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0719 05:23:21.310888       1 request.go:655] Throttling request took 1.047348384s, request: GET:https://192.168.76.2:8443/apis/networking.k8s.io/v1beta1?timeout=32s
	W0719 05:23:22.162978       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0719 05:23:36.168272       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0719 05:23:53.809289       1 request.go:655] Throttling request took 1.044469117s, request: GET:https://192.168.76.2:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
	W0719 05:23:54.661070       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0719 05:24:06.667995       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0719 05:24:26.309140       1 request.go:655] Throttling request took 1.047488171s, request: GET:https://192.168.76.2:8443/apis/coordination.k8s.io/v1?timeout=32s
	W0719 05:24:27.161582       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	
	==> kube-proxy [4aae10524ed9] <==
	W0719 05:15:12.605135       1 proxier.go:651] Failed to read file /lib/modules/5.15.146.1-microsoft-standard-WSL2/modules.builtin with error open /lib/modules/5.15.146.1-microsoft-standard-WSL2/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0719 05:15:12.610037       1 proxier.go:661] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0719 05:15:12.614354       1 proxier.go:661] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0719 05:15:12.619159       1 proxier.go:661] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0719 05:15:12.674321       1 proxier.go:661] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0719 05:15:12.678201       1 proxier.go:661] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	I0719 05:15:12.704751       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0719 05:15:12.704943       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0719 05:15:12.801149       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0719 05:15:12.801391       1 server_others.go:185] Using iptables Proxier.
	I0719 05:15:12.802104       1 server.go:650] Version: v1.20.0
	I0719 05:15:12.808998       1 config.go:315] Starting service config controller
	I0719 05:15:12.809118       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0719 05:15:12.809189       1 config.go:224] Starting endpoint slice config controller
	I0719 05:15:12.809363       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0719 05:15:12.910105       1 shared_informer.go:247] Caches are synced for service config 
	I0719 05:15:12.910329       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-proxy [756b5c94abf2] <==
	W0719 05:18:43.857318       1 proxier.go:651] Failed to read file /lib/modules/5.15.146.1-microsoft-standard-WSL2/modules.builtin with error open /lib/modules/5.15.146.1-microsoft-standard-WSL2/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0719 05:18:43.862985       1 proxier.go:661] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0719 05:18:43.866853       1 proxier.go:661] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0719 05:18:43.870718       1 proxier.go:661] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0719 05:18:43.945861       1 proxier.go:661] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0719 05:18:43.951117       1 proxier.go:661] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	I0719 05:18:44.075689       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0719 05:18:44.075858       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0719 05:18:44.350596       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0719 05:18:44.351035       1 server_others.go:185] Using iptables Proxier.
	I0719 05:18:44.352134       1 server.go:650] Version: v1.20.0
	I0719 05:18:44.353229       1 config.go:315] Starting service config controller
	I0719 05:18:44.353371       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0719 05:18:44.353859       1 config.go:224] Starting endpoint slice config controller
	I0719 05:18:44.353988       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0719 05:18:44.453868       1 shared_informer.go:247] Caches are synced for service config 
	I0719 05:18:44.454459       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-scheduler [105b39486f2b] <==
	I0719 05:14:48.474864       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0719 05:14:48.475813       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0719 05:14:48.476575       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0719 05:14:48.481604       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0719 05:14:48.481824       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0719 05:14:48.482590       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0719 05:14:48.483752       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0719 05:14:48.484481       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0719 05:14:48.485397       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0719 05:14:48.485765       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0719 05:14:48.486305       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0719 05:14:48.486313       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0719 05:14:48.486364       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0719 05:14:48.486476       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0719 05:14:48.492722       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0719 05:14:49.301618       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0719 05:14:49.324143       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0719 05:14:49.444398       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0719 05:14:49.631268       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0719 05:14:49.688105       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0719 05:14:49.707487       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0719 05:14:49.768901       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0719 05:14:49.817725       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0719 05:14:49.983132       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0719 05:14:52.675178       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kube-scheduler [95dad6d99e18] <==
	I0719 05:18:30.757068       1 serving.go:331] Generated self-signed cert in-memory
	W0719 05:18:37.558736       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0719 05:18:37.559183       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0719 05:18:37.559231       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0719 05:18:37.559242       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0719 05:18:37.853291       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0719 05:18:37.853996       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0719 05:18:37.854013       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0719 05:18:37.854059       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0719 05:18:38.054853       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Jul 19 05:22:26 old-k8s-version-546500 kubelet[1892]: E0719 05:22:26.132249    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 19 05:22:28 old-k8s-version-546500 kubelet[1892]: E0719 05:22:28.133462    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Jul 19 05:22:39 old-k8s-version-546500 kubelet[1892]: E0719 05:22:39.132621    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 19 05:22:42 old-k8s-version-546500 kubelet[1892]: E0719 05:22:42.128679    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Jul 19 05:22:54 old-k8s-version-546500 kubelet[1892]: E0719 05:22:54.127469    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 19 05:22:57 old-k8s-version-546500 kubelet[1892]: E0719 05:22:57.129488    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Jul 19 05:23:09 old-k8s-version-546500 kubelet[1892]: E0719 05:23:09.127590    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 19 05:23:10 old-k8s-version-546500 kubelet[1892]: E0719 05:23:10.125075    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Jul 19 05:23:22 old-k8s-version-546500 kubelet[1892]: E0719 05:23:22.139954    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Jul 19 05:23:23 old-k8s-version-546500 kubelet[1892]: W0719 05:23:23.156825    1892 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	Jul 19 05:23:23 old-k8s-version-546500 kubelet[1892]: W0719 05:23:23.158206    1892 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu0/online: open /sys/devices/system/cpu/cpu0/online: no such file or directory
	Jul 19 05:23:24 old-k8s-version-546500 kubelet[1892]: E0719 05:23:24.125805    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 19 05:23:37 old-k8s-version-546500 kubelet[1892]: E0719 05:23:37.126302    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Jul 19 05:23:39 old-k8s-version-546500 kubelet[1892]: E0719 05:23:39.125983    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 19 05:23:49 old-k8s-version-546500 kubelet[1892]: E0719 05:23:49.119898    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Jul 19 05:23:53 old-k8s-version-546500 kubelet[1892]: E0719 05:23:53.122456    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 19 05:24:01 old-k8s-version-546500 kubelet[1892]: E0719 05:24:01.120229    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Jul 19 05:24:07 old-k8s-version-546500 kubelet[1892]: E0719 05:24:07.124063    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 19 05:24:13 old-k8s-version-546500 kubelet[1892]: E0719 05:24:13.128990    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Jul 19 05:24:20 old-k8s-version-546500 kubelet[1892]: E0719 05:24:20.196888    1892 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host
	Jul 19 05:24:20 old-k8s-version-546500 kubelet[1892]: E0719 05:24:20.197026    1892 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host
	Jul 19 05:24:20 old-k8s-version-546500 kubelet[1892]: E0719 05:24:20.197176    1892 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-tqmlc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host
	Jul 19 05:24:20 old-k8s-version-546500 kubelet[1892]: E0719 05:24:20.197214    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	Jul 19 05:24:28 old-k8s-version-546500 kubelet[1892]: E0719 05:24:28.133706    1892 pod_workers.go:191] Error syncing pod cbeb6671-dc80-46c3-9590-f44c37efa55a ("dashboard-metrics-scraper-8d5bb5db8-pbbzz_kubernetes-dashboard(cbeb6671-dc80-46c3-9590-f44c37efa55a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Jul 19 05:24:31 old-k8s-version-546500 kubelet[1892]: E0719 05:24:31.120332    1892 pod_workers.go:191] Error syncing pod f5493361-1398-4308-8bb5-2f3d688c34b4 ("metrics-server-9975d5f86-mzhrf_kube-system(f5493361-1398-4308-8bb5-2f3d688c34b4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	
	==> kubernetes-dashboard [91ec1ba3377b] <==
	2024/07/19 05:19:25 Starting overwatch
	2024/07/19 05:19:25 Using namespace: kubernetes-dashboard
	2024/07/19 05:19:25 Using in-cluster config to connect to apiserver
	2024/07/19 05:19:25 Using secret token for csrf signing
	2024/07/19 05:19:25 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/07/19 05:19:25 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/07/19 05:19:25 Successful initial request to the apiserver, version: v1.20.0
	2024/07/19 05:19:25 Generating JWE encryption key
	2024/07/19 05:19:25 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/07/19 05:19:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/07/19 05:19:25 Initializing JWE encryption key from synchronized object
	2024/07/19 05:19:25 Creating in-cluster Sidecar client
	2024/07/19 05:19:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/19 05:19:25 Serving insecurely on HTTP port: 9090
	2024/07/19 05:19:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/19 05:20:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/19 05:20:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/19 05:21:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/19 05:21:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/19 05:22:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/19 05:22:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/19 05:23:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/19 05:23:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/19 05:24:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [ccd5bac65a54] <==
	I0719 05:19:24.728016       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0719 05:19:24.783162       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0719 05:19:24.783584       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0719 05:19:42.309800       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0719 05:19:42.310024       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2b3db5a3-2072-412d-9598-9bf2a5891231", APIVersion:"v1", ResourceVersion:"815", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-546500_b82c341b-0788-41b0-a385-5da2b3ced6dd became leader
	I0719 05:19:42.310121       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-546500_b82c341b-0788-41b0-a385-5da2b3ced6dd!
	I0719 05:19:42.410569       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-546500_b82c341b-0788-41b0-a385-5da2b3ced6dd!
	
	
	==> storage-provisioner [cf6d6836594d] <==
	I0719 05:18:44.574711       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0719 05:19:05.718139       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0719 05:24:31.790414    2656 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-546500 -n old-k8s-version-546500
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-546500 -n old-k8s-version-546500: (1.4524828s)
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-546500 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-mzhrf dashboard-metrics-scraper-8d5bb5db8-pbbzz
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-546500 describe pod metrics-server-9975d5f86-mzhrf dashboard-metrics-scraper-8d5bb5db8-pbbzz
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-546500 describe pod metrics-server-9975d5f86-mzhrf dashboard-metrics-scraper-8d5bb5db8-pbbzz: exit status 1 (412.0592ms)

                                                
                                                
** stderr ** 
	E0719 05:24:38.238422    6312 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0719 05:24:38.326377    6312 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0719 05:24:38.336218    6312 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0719 05:24:38.349034    6312 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	Error from server (NotFound): pods "metrics-server-9975d5f86-mzhrf" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-8d5bb5db8-pbbzz" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-546500 describe pod metrics-server-9975d5f86-mzhrf dashboard-metrics-scraper-8d5bb5db8-pbbzz: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (427.50s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (37.73s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p no-preload-857600 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe pause -p no-preload-857600 --alsologtostderr -v=1: exit status 80 (3.2332891s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-857600 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0719 05:21:36.414036   13368 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0719 05:21:36.518260   13368 out.go:291] Setting OutFile to fd 1668 ...
	I0719 05:21:36.519350   13368 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 05:21:36.519520   13368 out.go:304] Setting ErrFile to fd 1800...
	I0719 05:21:36.519520   13368 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 05:21:36.537288   13368 out.go:298] Setting JSON to false
	I0719 05:21:36.537386   13368 mustload.go:65] Loading cluster: no-preload-857600
	I0719 05:21:36.537986   13368 config.go:182] Loaded profile config "no-preload-857600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0719 05:21:36.567104   13368 cli_runner.go:164] Run: docker container inspect no-preload-857600 --format={{.State.Status}}
	I0719 05:21:36.749442   13368 host.go:66] Checking if "no-preload-857600" exists ...
	I0719 05:21:36.761261   13368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-857600
	I0719 05:21:36.967964   13368 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false)
extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.33.1-1721324531-19298/minikube-v1.33.1-1721324531-19298-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.33.1-1721324531-19298-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///syste
m listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string:C:\Users\jenkins.minikube3:/minikube-host mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-857600 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotifi
cation:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0719 05:21:36.976094   13368 out.go:177] * Pausing node no-preload-857600 ... 
	I0719 05:21:36.981786   13368 host.go:66] Checking if "no-preload-857600" exists ...
	I0719 05:21:36.998266   13368 ssh_runner.go:195] Run: systemctl --version
	I0719 05:21:37.009049   13368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-857600
	I0719 05:21:37.227284   13368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52412 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\no-preload-857600\id_rsa Username:docker}
	I0719 05:21:37.385082   13368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 05:21:37.412986   13368 pause.go:51] kubelet running: true
	I0719 05:21:37.437770   13368 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0719 05:21:37.866894   13368 ssh_runner.go:195] Run: docker ps --filter status=running --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}
	I0719 05:21:37.931716   13368 docker.go:500] Pausing containers: [f574e991ac3b 86d5658d7996 93e4e45355fa 31362a64d1d5 0f5f9cd278b3 e442a865f3cd a83c2fcaf08a 95e591d61424 6e4428c65160 c5e888205c09 1996f8977b08 0e6a091da1ad cccbfe71624e 5ab315262a22 605313d29ef1 f551e032953e c041bebd0495 8bcd2e14ae48]
	I0719 05:21:37.942951   13368 ssh_runner.go:195] Run: docker pause f574e991ac3b 86d5658d7996 93e4e45355fa 31362a64d1d5 0f5f9cd278b3 e442a865f3cd a83c2fcaf08a 95e591d61424 6e4428c65160 c5e888205c09 1996f8977b08 0e6a091da1ad cccbfe71624e 5ab315262a22 605313d29ef1 f551e032953e c041bebd0495 8bcd2e14ae48
	I0719 05:21:39.411971   13368 ssh_runner.go:235] Completed: docker pause f574e991ac3b 86d5658d7996 93e4e45355fa 31362a64d1d5 0f5f9cd278b3 e442a865f3cd a83c2fcaf08a 95e591d61424 6e4428c65160 c5e888205c09 1996f8977b08 0e6a091da1ad cccbfe71624e 5ab315262a22 605313d29ef1 f551e032953e c041bebd0495 8bcd2e14ae48: (1.4690086s)
	I0719 05:21:39.418020   13368 out.go:177] 
	W0719 05:21:39.422934   13368 out.go:239] X Exiting due to GUEST_PAUSE: Pause: pausing containers: docker: docker pause f574e991ac3b 86d5658d7996 93e4e45355fa 31362a64d1d5 0f5f9cd278b3 e442a865f3cd a83c2fcaf08a 95e591d61424 6e4428c65160 c5e888205c09 1996f8977b08 0e6a091da1ad cccbfe71624e 5ab315262a22 605313d29ef1 f551e032953e c041bebd0495 8bcd2e14ae48: Process exited with status 1
	stdout:
	f574e991ac3b
	86d5658d7996
	93e4e45355fa
	31362a64d1d5
	0f5f9cd278b3
	e442a865f3cd
	a83c2fcaf08a
	95e591d61424
	6e4428c65160
	c5e888205c09
	1996f8977b08
	0e6a091da1ad
	5ab315262a22
	605313d29ef1
	f551e032953e
	c041bebd0495
	8bcd2e14ae48
	
	stderr:
	Error response from daemon: cannot pause container cccbfe71624edef056c8c354cf0407e2da608a77084113a15e615dbcd101eab2: OCI runtime pause failed: unable to freeze: unknown
	
	X Exiting due to GUEST_PAUSE: Pause: pausing containers: docker: docker pause f574e991ac3b 86d5658d7996 93e4e45355fa 31362a64d1d5 0f5f9cd278b3 e442a865f3cd a83c2fcaf08a 95e591d61424 6e4428c65160 c5e888205c09 1996f8977b08 0e6a091da1ad cccbfe71624e 5ab315262a22 605313d29ef1 f551e032953e c041bebd0495 8bcd2e14ae48: Process exited with status 1
	stdout:
	f574e991ac3b
	86d5658d7996
	93e4e45355fa
	31362a64d1d5
	0f5f9cd278b3
	e442a865f3cd
	a83c2fcaf08a
	95e591d61424
	6e4428c65160
	c5e888205c09
	1996f8977b08
	0e6a091da1ad
	5ab315262a22
	605313d29ef1
	f551e032953e
	c041bebd0495
	8bcd2e14ae48
	
	stderr:
	Error response from daemon: cannot pause container cccbfe71624edef056c8c354cf0407e2da608a77084113a15e615dbcd101eab2: OCI runtime pause failed: unable to freeze: unknown
	
	W0719 05:21:39.422934   13368 out.go:239] * 
	* 
	W0719 05:21:39.453940   13368 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube_pause_26475df06b51455fca7312b7aad83667d1d3f5a8_0.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube_pause_26475df06b51455fca7312b7aad83667d1d3f5a8_0.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 05:21:39.459948   13368 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-windows-amd64.exe pause -p no-preload-857600 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-857600
helpers_test.go:235: (dbg) docker inspect no-preload-857600:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "667b494ac545ee4e186d209dd1df140da250ec31165a3140be9ec5345e3b5683",
	        "Created": "2024-07-19T05:12:24.670058141Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 866365,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-07-19T05:16:27.471271649Z",
	            "FinishedAt": "2024-07-19T05:16:20.847309251Z"
	        },
	        "Image": "sha256:7bda27423b38cbebec7632cdf15a8fcb063ff209d17af249e6b3f1fbdb5fa681",
	        "ResolvConfPath": "/var/lib/docker/containers/667b494ac545ee4e186d209dd1df140da250ec31165a3140be9ec5345e3b5683/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/667b494ac545ee4e186d209dd1df140da250ec31165a3140be9ec5345e3b5683/hostname",
	        "HostsPath": "/var/lib/docker/containers/667b494ac545ee4e186d209dd1df140da250ec31165a3140be9ec5345e3b5683/hosts",
	        "LogPath": "/var/lib/docker/containers/667b494ac545ee4e186d209dd1df140da250ec31165a3140be9ec5345e3b5683/667b494ac545ee4e186d209dd1df140da250ec31165a3140be9ec5345e3b5683-json.log",
	        "Name": "/no-preload-857600",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-857600:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-857600",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/56815ed51d8bb48e26200a2d27b3106c308e11b0da970bb622bc311f4c1b5bf7-init/diff:/var/lib/docker/overlay2/8afef3549fbfde76a8b1d15736e3430a7f83f1f1968778d28daa6047c0f61b28/diff",
	                "MergedDir": "/var/lib/docker/overlay2/56815ed51d8bb48e26200a2d27b3106c308e11b0da970bb622bc311f4c1b5bf7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/56815ed51d8bb48e26200a2d27b3106c308e11b0da970bb622bc311f4c1b5bf7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/56815ed51d8bb48e26200a2d27b3106c308e11b0da970bb622bc311f4c1b5bf7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-857600",
	                "Source": "/var/lib/docker/volumes/no-preload-857600/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-857600",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-857600",
	                "name.minikube.sigs.k8s.io": "no-preload-857600",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "870d5e72a3217130fd39acb4444e0e08fa514c00a8d149132cafa87fa59230b2",
	            "SandboxKey": "/var/run/docker/netns/870d5e72a321",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52412"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52413"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52414"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52415"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52416"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-857600": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:5e:02",
	                    "NetworkID": "8d61708cdaacbe5de7de00fd2e6a54be51ebaffac2278baa57cc2bbbc32e39a2",
	                    "EndpointID": "39084deef717acbd6a302318c86aee86431e7a7f0612636c54500b8d4f7f78c5",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "no-preload-857600",
	                        "667b494ac545"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-857600 -n no-preload-857600
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-857600 -n no-preload-857600: exit status 2 (1.8458025s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	W0719 05:21:39.871375   11120 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-857600 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p no-preload-857600 logs -n 25: (13.4738376s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|-------------------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|-------------------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p default-k8s-diff-port-683400  | default-k8s-diff-port-683400 | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:15 UTC | 19 Jul 24 05:15 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |                   |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |                   |         |                     |                     |
	| start   | -p embed-certs-561200                                  | embed-certs-561200           | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:15 UTC | 19 Jul 24 05:20 UTC |
	|         | --memory=2200                                          |                              |                   |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |                   |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |                   |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-683400 | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:15 UTC | 19 Jul 24 05:15 UTC |
	|         | default-k8s-diff-port-683400                           |                              |                   |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |                   |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-683400       | default-k8s-diff-port-683400 | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:15 UTC | 19 Jul 24 05:15 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |                   |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-683400 | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:15 UTC | 19 Jul 24 05:20 UTC |
	|         | default-k8s-diff-port-683400                           |                              |                   |         |                     |                     |
	|         | --memory=2200                                          |                              |                   |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |                   |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |                   |         |                     |                     |
	|         | --driver=docker                                        |                              |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |                   |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-857600             | no-preload-857600            | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:16 UTC | 19 Jul 24 05:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |                   |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |                   |         |                     |                     |
	| stop    | -p no-preload-857600                                   | no-preload-857600            | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:16 UTC | 19 Jul 24 05:16 UTC |
	|         | --alsologtostderr -v=3                                 |                              |                   |         |                     |                     |
	| addons  | enable dashboard -p no-preload-857600                  | no-preload-857600            | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:16 UTC | 19 Jul 24 05:16 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |                   |         |                     |                     |
	| start   | -p no-preload-857600 --memory=2200                     | no-preload-857600            | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:16 UTC | 19 Jul 24 05:21 UTC |
	|         | --alsologtostderr --wait=true                          |                              |                   |         |                     |                     |
	|         | --preload=false --driver=docker                        |                              |                   |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |                   |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-546500        | old-k8s-version-546500       | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:17 UTC | 19 Jul 24 05:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |                   |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |                   |         |                     |                     |
	| stop    | -p old-k8s-version-546500                              | old-k8s-version-546500       | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:17 UTC | 19 Jul 24 05:17 UTC |
	|         | --alsologtostderr -v=3                                 |                              |                   |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-546500             | old-k8s-version-546500       | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:17 UTC | 19 Jul 24 05:17 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |                   |         |                     |                     |
	| start   | -p old-k8s-version-546500                              | old-k8s-version-546500       | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:17 UTC |                     |
	|         | --memory=2200                                          |                              |                   |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |                   |         |                     |                     |
	|         | --kvm-network=default                                  |                              |                   |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |                   |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |                   |         |                     |                     |
	|         | --keep-context=false                                   |                              |                   |         |                     |                     |
	|         | --driver=docker                                        |                              |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |                   |         |                     |                     |
	| image   | embed-certs-561200 image list                          | embed-certs-561200           | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:20 UTC | 19 Jul 24 05:20 UTC |
	|         | --format=json                                          |                              |                   |         |                     |                     |
	| pause   | -p embed-certs-561200                                  | embed-certs-561200           | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:20 UTC | 19 Jul 24 05:20 UTC |
	|         | --alsologtostderr -v=1                                 |                              |                   |         |                     |                     |
	| unpause | -p embed-certs-561200                                  | embed-certs-561200           | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:20 UTC | 19 Jul 24 05:20 UTC |
	|         | --alsologtostderr -v=1                                 |                              |                   |         |                     |                     |
	| delete  | -p embed-certs-561200                                  | embed-certs-561200           | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:20 UTC | 19 Jul 24 05:21 UTC |
	| delete  | -p embed-certs-561200                                  | embed-certs-561200           | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:21 UTC | 19 Jul 24 05:21 UTC |
	| start   | -p newest-cni-800400 --memory=2200 --alsologtostderr   | newest-cni-800400            | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:21 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |                   |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |                   |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |                   |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |                   |         |                     |                     |
	|         | --driver=docker --kubernetes-version=v1.31.0-beta.0    |                              |                   |         |                     |                     |
	| image   | default-k8s-diff-port-683400                           | default-k8s-diff-port-683400 | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:21 UTC | 19 Jul 24 05:21 UTC |
	|         | image list --format=json                               |                              |                   |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-683400 | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:21 UTC | 19 Jul 24 05:21 UTC |
	|         | default-k8s-diff-port-683400                           |                              |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |                   |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-683400 | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:21 UTC | 19 Jul 24 05:21 UTC |
	|         | default-k8s-diff-port-683400                           |                              |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |                   |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-683400 | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:21 UTC |                     |
	|         | default-k8s-diff-port-683400                           |                              |                   |         |                     |                     |
	| image   | no-preload-857600 image list                           | no-preload-857600            | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:21 UTC | 19 Jul 24 05:21 UTC |
	|         | --format=json                                          |                              |                   |         |                     |                     |
	| pause   | -p no-preload-857600                                   | no-preload-857600            | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:21 UTC |                     |
	|         | --alsologtostderr -v=1                                 |                              |                   |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 05:21:05
	Running on machine: minikube3
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 05:21:05.544598    9828 out.go:291] Setting OutFile to fd 1888 ...
	I0719 05:21:05.544850    9828 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 05:21:05.544850    9828 out.go:304] Setting ErrFile to fd 1464...
	I0719 05:21:05.544850    9828 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 05:21:05.585226    9828 out.go:298] Setting JSON to false
	I0719 05:21:05.589782    9828 start.go:129] hostinfo: {"hostname":"minikube3","uptime":183450,"bootTime":1721183014,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0719 05:21:05.589782    9828 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 05:21:05.599796    9828 out.go:177] * [newest-cni-800400] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0719 05:21:05.605310    9828 notify.go:220] Checking for updates...
	I0719 05:21:05.609306    9828 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0719 05:21:05.616735    9828 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 05:21:05.622032    9828 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0719 05:21:05.628702    9828 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 05:21:05.637308    9828 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 05:21:01.361555    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:03.878404    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:05.643295    9828 config.go:182] Loaded profile config "default-k8s-diff-port-683400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 05:21:05.644338    9828 config.go:182] Loaded profile config "no-preload-857600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0719 05:21:05.645275    9828 config.go:182] Loaded profile config "old-k8s-version-546500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0719 05:21:05.645275    9828 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 05:21:05.984488    9828 docker.go:123] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0719 05:21:06.000840    9828 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0719 05:21:06.415933    9828 info.go:266] docker info: {ID:924ecda6-fdfd-44a1-a6d3-1c1814628cc9 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:96 OomKillDisable:true NGoroutines:97 SystemTime:2024-07-19 05:21:06.36667585 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0719 05:21:06.421918    9828 out.go:177] * Using the docker driver based on user configuration
	I0719 05:21:06.430919    9828 start.go:297] selected driver: docker
	I0719 05:21:06.430919    9828 start.go:901] validating driver "docker" against <nil>
	I0719 05:21:06.430919    9828 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 05:21:06.597378    9828 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0719 05:21:07.028764    9828 info.go:266] docker info: {ID:924ecda6-fdfd-44a1-a6d3-1c1814628cc9 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:96 OomKillDisable:true NGoroutines:97 SystemTime:2024-07-19 05:21:06.978240696 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0719 05:21:07.029332    9828 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0719 05:21:07.029332    9828 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0719 05:21:07.030647    9828 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0719 05:21:07.034538    9828 out.go:177] * Using Docker Desktop driver with root privileges
	I0719 05:21:07.038534    9828 cni.go:84] Creating CNI manager for ""
	I0719 05:21:07.038534    9828 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 05:21:07.038534    9828 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 05:21:07.039537    9828 start.go:340] cluster config:
	{Name:newest-cni-800400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-800400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 05:21:07.043543    9828 out.go:177] * Starting "newest-cni-800400" primary control-plane node in "newest-cni-800400" cluster
	I0719 05:21:07.049532    9828 cache.go:121] Beginning downloading kic base image for docker with docker
	I0719 05:21:07.051582    9828 out.go:177] * Pulling base image v0.0.44-1721324606-19298 ...
	I0719 05:21:07.058533    9828 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0719 05:21:07.058533    9828 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local docker daemon
	I0719 05:21:07.058533    9828 preload.go:146] Found local preload: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0719 05:21:07.058533    9828 cache.go:56] Caching tarball of preloaded images
	I0719 05:21:07.059532    9828 preload.go:172] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0719 05:21:07.059532    9828 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0719 05:21:07.059532    9828 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\newest-cni-800400\config.json ...
	I0719 05:21:07.059532    9828 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\newest-cni-800400\config.json: {Name:mk8d31df392155f0e36c475193a46ad89ff9c4a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0719 05:21:07.306857    9828 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f is of wrong architecture
	I0719 05:21:07.307888    9828 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f to local cache
	I0719 05:21:07.307888    9828 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f.tar -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.44-1721324606-19298@sha256_1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f.tar
	I0719 05:21:07.307888    9828 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f.tar -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.44-1721324606-19298@sha256_1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f.tar
	I0719 05:21:07.307888    9828 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory
	I0719 05:21:07.307888    9828 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory, skipping pull
	I0719 05:21:07.307888    9828 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f exists in cache, skipping pull
	I0719 05:21:07.307888    9828 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f as a tarball
	I0719 05:21:07.308860    9828 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f from local cache
	I0719 05:21:07.308860    9828 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f.tar -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.44-1721324606-19298@sha256_1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f.tar
	I0719 05:21:08.006060    9828 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f from cached tarball
	I0719 05:21:08.006060    9828 cache.go:194] Successfully downloaded all kic artifacts
	I0719 05:21:08.006656    9828 start.go:360] acquireMachinesLock for newest-cni-800400: {Name:mkdd3b144b2005e1885add254cd0c3cf58c61802 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 05:21:08.006908    9828 start.go:364] duration metric: took 179.1µs to acquireMachinesLock for "newest-cni-800400"
	I0719 05:21:08.007051    9828 start.go:93] Provisioning new machine with config: &{Name:newest-cni-800400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-800400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 05:21:08.007051    9828 start.go:125] createHost starting for "" (driver="docker")
	I0719 05:21:04.771841    6228 pod_ready.go:81] duration metric: took 4m0.002574s for pod "metrics-server-78fcd8795b-p4shw" in "kube-system" namespace to be "Ready" ...
	E0719 05:21:04.771942    6228 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0719 05:21:04.771942    6228 pod_ready.go:38] duration metric: took 4m0.9010995s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 05:21:04.772018    6228 api_server.go:52] waiting for apiserver process to appear ...
	I0719 05:21:04.783345    6228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 05:21:04.852836    6228 logs.go:276] 2 containers: [0e6a091da1ad a1f089136dfd]
	I0719 05:21:04.868642    6228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 05:21:04.916461    6228 logs.go:276] 2 containers: [cccbfe71624e 4566c20bc227]
	I0719 05:21:04.930239    6228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 05:21:04.986060    6228 logs.go:276] 2 containers: [93e4e45355fa 9ce07a413e20]
	I0719 05:21:04.995042    6228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 05:21:05.059576    6228 logs.go:276] 2 containers: [5ab315262a22 85b22019ef17]
	I0719 05:21:05.072566    6228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 05:21:05.124566    6228 logs.go:276] 2 containers: [e442a865f3cd 56deee6eea2d]
	I0719 05:21:05.136557    6228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 05:21:05.195124    6228 logs.go:276] 2 containers: [1996f8977b08 1f63215b4697]
	I0719 05:21:05.206140    6228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 05:21:05.263845    6228 logs.go:276] 0 containers: []
	W0719 05:21:05.263845    6228 logs.go:278] No container was found matching "kindnet"
	I0719 05:21:05.275805    6228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0719 05:21:05.327283    6228 logs.go:276] 1 containers: [f574e991ac3b]
	I0719 05:21:05.338277    6228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 05:21:05.397092    6228 logs.go:276] 2 containers: [0f5f9cd278b3 5a57bcf51ab5]
	I0719 05:21:05.397092    6228 logs.go:123] Gathering logs for describe nodes ...
	I0719 05:21:05.397092    6228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 05:21:05.675031    6228 logs.go:123] Gathering logs for kube-apiserver [0e6a091da1ad] ...
	I0719 05:21:05.675031    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6a091da1ad"
	I0719 05:21:05.794737    6228 logs.go:123] Gathering logs for etcd [cccbfe71624e] ...
	I0719 05:21:05.794737    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cccbfe71624e"
	I0719 05:21:05.986285    6228 logs.go:123] Gathering logs for kubernetes-dashboard [f574e991ac3b] ...
	I0719 05:21:05.986380    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f574e991ac3b"
	I0719 05:21:06.093385    6228 logs.go:123] Gathering logs for storage-provisioner [5a57bcf51ab5] ...
	I0719 05:21:06.093385    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a57bcf51ab5"
	I0719 05:21:06.159330    6228 logs.go:123] Gathering logs for kubelet ...
	I0719 05:21:06.159330    6228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 05:21:06.312651    6228 logs.go:123] Gathering logs for dmesg ...
	I0719 05:21:06.312651    6228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 05:21:06.350198    6228 logs.go:123] Gathering logs for kube-scheduler [5ab315262a22] ...
	I0719 05:21:06.350198    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab315262a22"
	I0719 05:21:06.424935    6228 logs.go:123] Gathering logs for kube-scheduler [85b22019ef17] ...
	I0719 05:21:06.424935    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85b22019ef17"
	I0719 05:21:06.508942    6228 logs.go:123] Gathering logs for kube-controller-manager [1996f8977b08] ...
	I0719 05:21:06.509011    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1996f8977b08"
	I0719 05:21:06.604383    6228 logs.go:123] Gathering logs for container status ...
	I0719 05:21:06.604383    6228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 05:21:06.720382    6228 logs.go:123] Gathering logs for Docker ...
	I0719 05:21:06.720382    6228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 05:21:06.782103    6228 logs.go:123] Gathering logs for kube-apiserver [a1f089136dfd] ...
	I0719 05:21:06.782103    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1f089136dfd"
	I0719 05:21:06.951303    6228 logs.go:123] Gathering logs for coredns [93e4e45355fa] ...
	I0719 05:21:06.951303    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e4e45355fa"
	I0719 05:21:07.041538    6228 logs.go:123] Gathering logs for coredns [9ce07a413e20] ...
	I0719 05:21:07.041538    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ce07a413e20"
	I0719 05:21:07.096559    6228 logs.go:123] Gathering logs for kube-proxy [56deee6eea2d] ...
	I0719 05:21:07.096559    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56deee6eea2d"
	I0719 05:21:07.167839    6228 logs.go:123] Gathering logs for kube-controller-manager [1f63215b4697] ...
	I0719 05:21:07.167839    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f63215b4697"
	I0719 05:21:07.253840    6228 logs.go:123] Gathering logs for storage-provisioner [0f5f9cd278b3] ...
	I0719 05:21:07.253840    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f5f9cd278b3"
	I0719 05:21:07.322844    6228 logs.go:123] Gathering logs for etcd [4566c20bc227] ...
	I0719 05:21:07.322844    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4566c20bc227"
	I0719 05:21:07.434264    6228 logs.go:123] Gathering logs for kube-proxy [e442a865f3cd] ...
	I0719 05:21:07.434264    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e442a865f3cd"
	I0719 05:21:08.014215    9828 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0719 05:21:08.015176    9828 start.go:159] libmachine.API.Create for "newest-cni-800400" (driver="docker")
	I0719 05:21:08.015176    9828 client.go:168] LocalClient.Create starting
	I0719 05:21:08.015176    9828 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem
	I0719 05:21:08.016251    9828 main.go:141] libmachine: Decoding PEM data...
	I0719 05:21:08.016251    9828 main.go:141] libmachine: Parsing certificate...
	I0719 05:21:08.016251    9828 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem
	I0719 05:21:08.016251    9828 main.go:141] libmachine: Decoding PEM data...
	I0719 05:21:08.016251    9828 main.go:141] libmachine: Parsing certificate...
	I0719 05:21:08.029167    9828 cli_runner.go:164] Run: docker network inspect newest-cni-800400 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0719 05:21:08.227174    9828 cli_runner.go:211] docker network inspect newest-cni-800400 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0719 05:21:08.237195    9828 network_create.go:284] running [docker network inspect newest-cni-800400] to gather additional debugging logs...
	I0719 05:21:08.237195    9828 cli_runner.go:164] Run: docker network inspect newest-cni-800400
	W0719 05:21:08.431164    9828 cli_runner.go:211] docker network inspect newest-cni-800400 returned with exit code 1
	I0719 05:21:08.431164    9828 network_create.go:287] error running [docker network inspect newest-cni-800400]: docker network inspect newest-cni-800400: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-800400 not found
	I0719 05:21:08.431164    9828 network_create.go:289] output of [docker network inspect newest-cni-800400]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-800400 not found
	
	** /stderr **
	I0719 05:21:08.446928    9828 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0719 05:21:08.687616    9828 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0719 05:21:08.719607    9828 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0719 05:21:08.740616    9828 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00163fce0}
	I0719 05:21:08.740616    9828 network_create.go:124] attempt to create docker network newest-cni-800400 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0719 05:21:08.752610    9828 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-800400 newest-cni-800400
	W0719 05:21:08.949551    9828 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-800400 newest-cni-800400 returned with exit code 1
	W0719 05:21:08.949551    9828 network_create.go:149] failed to create docker network newest-cni-800400 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-800400 newest-cni-800400: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0719 05:21:08.949551    9828 network_create.go:116] failed to create docker network newest-cni-800400 192.168.67.0/24, will retry: subnet is taken
	I0719 05:21:08.975549    9828 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0719 05:21:09.003822    9828 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0016aca20}
	I0719 05:21:09.003822    9828 network_create.go:124] attempt to create docker network newest-cni-800400 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0719 05:21:09.017428    9828 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-800400 newest-cni-800400
	W0719 05:21:09.231231    9828 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-800400 newest-cni-800400 returned with exit code 1
	W0719 05:21:09.232229    9828 network_create.go:149] failed to create docker network newest-cni-800400 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-800400 newest-cni-800400: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0719 05:21:09.232229    9828 network_create.go:116] failed to create docker network newest-cni-800400 192.168.76.0/24, will retry: subnet is taken
	I0719 05:21:09.271550    9828 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0719 05:21:09.298199    9828 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00158dd40}
	I0719 05:21:09.298199    9828 network_create.go:124] attempt to create docker network newest-cni-800400 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0719 05:21:09.311185    9828 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-800400 newest-cni-800400
	I0719 05:21:09.607529    9828 network_create.go:108] docker network newest-cni-800400 192.168.85.0/24 created
	I0719 05:21:09.607634    9828 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-800400" container
	I0719 05:21:09.631419    9828 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0719 05:21:09.849846    9828 cli_runner.go:164] Run: docker volume create newest-cni-800400 --label name.minikube.sigs.k8s.io=newest-cni-800400 --label created_by.minikube.sigs.k8s.io=true
	I0719 05:21:10.216728    9828 oci.go:103] Successfully created a docker volume newest-cni-800400
	I0719 05:21:10.230744    9828 cli_runner.go:164] Run: docker run --rm --name newest-cni-800400-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-800400 --entrypoint /usr/bin/test -v newest-cni-800400:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -d /var/lib
	I0719 05:21:06.388905    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:08.864563    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:10.916641    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:10.024949    6228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 05:21:10.061590    6228 api_server.go:72] duration metric: took 4m23.011977s to wait for apiserver process to appear ...
	I0719 05:21:10.061590    6228 api_server.go:88] waiting for apiserver healthz status ...
	I0719 05:21:10.072656    6228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 05:21:10.114998    6228 logs.go:276] 2 containers: [0e6a091da1ad a1f089136dfd]
	I0719 05:21:10.126000    6228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 05:21:10.179914    6228 logs.go:276] 2 containers: [cccbfe71624e 4566c20bc227]
	I0719 05:21:10.193171    6228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 05:21:10.255967    6228 logs.go:276] 2 containers: [93e4e45355fa 9ce07a413e20]
	I0719 05:21:10.269971    6228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 05:21:10.336981    6228 logs.go:276] 2 containers: [5ab315262a22 85b22019ef17]
	I0719 05:21:10.360056    6228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 05:21:10.420887    6228 logs.go:276] 2 containers: [e442a865f3cd 56deee6eea2d]
	I0719 05:21:10.435868    6228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 05:21:10.494173    6228 logs.go:276] 2 containers: [1996f8977b08 1f63215b4697]
	I0719 05:21:10.514880    6228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 05:21:10.575635    6228 logs.go:276] 0 containers: []
	W0719 05:21:10.575635    6228 logs.go:278] No container was found matching "kindnet"
	I0719 05:21:10.589636    6228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 05:21:10.656987    6228 logs.go:276] 2 containers: [0f5f9cd278b3 5a57bcf51ab5]
	I0719 05:21:10.666983    6228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0719 05:21:10.717973    6228 logs.go:276] 1 containers: [f574e991ac3b]
	I0719 05:21:10.717973    6228 logs.go:123] Gathering logs for dmesg ...
	I0719 05:21:10.717973    6228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 05:21:10.752976    6228 logs.go:123] Gathering logs for etcd [cccbfe71624e] ...
	I0719 05:21:10.752976    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cccbfe71624e"
	I0719 05:21:10.940631    6228 logs.go:123] Gathering logs for kube-proxy [e442a865f3cd] ...
	I0719 05:21:10.941652    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e442a865f3cd"
	I0719 05:21:11.021052    6228 logs.go:123] Gathering logs for Docker ...
	I0719 05:21:11.021052    6228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 05:21:11.074924    6228 logs.go:123] Gathering logs for kubelet ...
	I0719 05:21:11.074924    6228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 05:21:11.215390    6228 logs.go:123] Gathering logs for kube-apiserver [0e6a091da1ad] ...
	I0719 05:21:11.215390    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6a091da1ad"
	I0719 05:21:11.281513    6228 logs.go:123] Gathering logs for kube-controller-manager [1f63215b4697] ...
	I0719 05:21:11.281631    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f63215b4697"
	I0719 05:21:11.384541    6228 logs.go:123] Gathering logs for kubernetes-dashboard [f574e991ac3b] ...
	I0719 05:21:11.384541    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f574e991ac3b"
	I0719 05:21:11.451292    6228 logs.go:123] Gathering logs for describe nodes ...
	I0719 05:21:11.451292    6228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 05:21:11.704548    6228 logs.go:123] Gathering logs for coredns [93e4e45355fa] ...
	I0719 05:21:11.704548    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e4e45355fa"
	I0719 05:21:11.770250    6228 logs.go:123] Gathering logs for coredns [9ce07a413e20] ...
	I0719 05:21:11.770250    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ce07a413e20"
	I0719 05:21:11.833411    6228 logs.go:123] Gathering logs for kube-controller-manager [1996f8977b08] ...
	I0719 05:21:11.833411    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1996f8977b08"
	I0719 05:21:11.922967    6228 logs.go:123] Gathering logs for container status ...
	I0719 05:21:11.922967    6228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 05:21:12.031958    6228 logs.go:123] Gathering logs for kube-apiserver [a1f089136dfd] ...
	I0719 05:21:12.031958    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1f089136dfd"
	I0719 05:21:12.172829    6228 logs.go:123] Gathering logs for etcd [4566c20bc227] ...
	I0719 05:21:12.172829    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4566c20bc227"
	I0719 05:21:12.298336    6228 logs.go:123] Gathering logs for kube-scheduler [5ab315262a22] ...
	I0719 05:21:12.298336    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab315262a22"
	I0719 05:21:12.368193    6228 logs.go:123] Gathering logs for kube-scheduler [85b22019ef17] ...
	I0719 05:21:12.368332    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85b22019ef17"
	I0719 05:21:12.451185    6228 logs.go:123] Gathering logs for kube-proxy [56deee6eea2d] ...
	I0719 05:21:12.451185    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56deee6eea2d"
	I0719 05:21:12.511196    6228 logs.go:123] Gathering logs for storage-provisioner [0f5f9cd278b3] ...
	I0719 05:21:12.511196    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f5f9cd278b3"
	I0719 05:21:12.574176    6228 logs.go:123] Gathering logs for storage-provisioner [5a57bcf51ab5] ...
	I0719 05:21:12.574176    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a57bcf51ab5"
	I0719 05:21:13.376011    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:15.962031    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:15.132256    6228 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52416/healthz ...
	I0719 05:21:15.157268    6228 api_server.go:279] https://127.0.0.1:52416/healthz returned 200:
	ok
	I0719 05:21:15.162238    6228 api_server.go:141] control plane version: v1.31.0-beta.0
	I0719 05:21:15.162238    6228 api_server.go:131] duration metric: took 5.1006076s to wait for apiserver health ...
	I0719 05:21:15.162238    6228 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 05:21:15.177797    6228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 05:21:15.240194    6228 logs.go:276] 2 containers: [0e6a091da1ad a1f089136dfd]
	I0719 05:21:15.256449    6228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 05:21:15.309307    6228 logs.go:276] 2 containers: [cccbfe71624e 4566c20bc227]
	I0719 05:21:15.319300    6228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 05:21:15.370893    6228 logs.go:276] 2 containers: [93e4e45355fa 9ce07a413e20]
	I0719 05:21:15.383725    6228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 05:21:15.439994    6228 logs.go:276] 2 containers: [5ab315262a22 85b22019ef17]
	I0719 05:21:15.453980    6228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 05:21:15.508549    6228 logs.go:276] 2 containers: [e442a865f3cd 56deee6eea2d]
	I0719 05:21:15.522482    6228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 05:21:15.573561    6228 logs.go:276] 2 containers: [1996f8977b08 1f63215b4697]
	I0719 05:21:15.587849    6228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 05:21:15.653165    6228 logs.go:276] 0 containers: []
	W0719 05:21:15.653165    6228 logs.go:278] No container was found matching "kindnet"
	I0719 05:21:15.667326    6228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0719 05:21:15.718833    6228 logs.go:276] 1 containers: [f574e991ac3b]
	I0719 05:21:15.732351    6228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 05:21:15.783828    6228 logs.go:276] 2 containers: [0f5f9cd278b3 5a57bcf51ab5]
	I0719 05:21:15.783828    6228 logs.go:123] Gathering logs for kube-proxy [56deee6eea2d] ...
	I0719 05:21:15.783828    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56deee6eea2d"
	I0719 05:21:15.852243    6228 logs.go:123] Gathering logs for kube-controller-manager [1996f8977b08] ...
	I0719 05:21:15.852243    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1996f8977b08"
	I0719 05:21:15.944787    6228 logs.go:123] Gathering logs for kube-controller-manager [1f63215b4697] ...
	I0719 05:21:15.945784    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f63215b4697"
	I0719 05:21:16.041342    6228 logs.go:123] Gathering logs for kubernetes-dashboard [f574e991ac3b] ...
	I0719 05:21:16.041342    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f574e991ac3b"
	I0719 05:21:16.104359    6228 logs.go:123] Gathering logs for Docker ...
	I0719 05:21:16.104359    6228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 05:21:16.161355    6228 logs.go:123] Gathering logs for kubelet ...
	I0719 05:21:16.161355    6228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 05:21:16.300146    6228 logs.go:123] Gathering logs for describe nodes ...
	I0719 05:21:16.300146    6228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 05:21:16.888673    6228 logs.go:123] Gathering logs for coredns [93e4e45355fa] ...
	I0719 05:21:16.888673    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e4e45355fa"
	I0719 05:21:16.939697    6228 logs.go:123] Gathering logs for kube-scheduler [85b22019ef17] ...
	I0719 05:21:16.939697    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85b22019ef17"
	I0719 05:21:17.019689    6228 logs.go:123] Gathering logs for kube-proxy [e442a865f3cd] ...
	I0719 05:21:17.019689    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e442a865f3cd"
	I0719 05:21:17.088371    6228 logs.go:123] Gathering logs for dmesg ...
	I0719 05:21:17.088371    6228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 05:21:17.120374    6228 logs.go:123] Gathering logs for coredns [9ce07a413e20] ...
	I0719 05:21:17.120374    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ce07a413e20"
	I0719 05:21:17.185386    6228 logs.go:123] Gathering logs for storage-provisioner [5a57bcf51ab5] ...
	I0719 05:21:17.185386    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a57bcf51ab5"
	I0719 05:21:17.261383    6228 logs.go:123] Gathering logs for etcd [4566c20bc227] ...
	I0719 05:21:17.261383    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4566c20bc227"
	I0719 05:21:17.380019    6228 logs.go:123] Gathering logs for kube-apiserver [a1f089136dfd] ...
	I0719 05:21:17.380019    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1f089136dfd"
	I0719 05:21:17.517185    6228 logs.go:123] Gathering logs for etcd [cccbfe71624e] ...
	I0719 05:21:17.517185    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cccbfe71624e"
	I0719 05:21:17.720587    6228 logs.go:123] Gathering logs for kube-scheduler [5ab315262a22] ...
	I0719 05:21:17.720587    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab315262a22"
	I0719 05:21:17.797078    6228 logs.go:123] Gathering logs for storage-provisioner [0f5f9cd278b3] ...
	I0719 05:21:17.797078    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f5f9cd278b3"
	I0719 05:21:17.940049    6228 logs.go:123] Gathering logs for container status ...
	I0719 05:21:17.940049    6228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 05:21:18.059701    6228 logs.go:123] Gathering logs for kube-apiserver [0e6a091da1ad] ...
	I0719 05:21:18.059701    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6a091da1ad"
	I0719 05:21:16.056353    9828 cli_runner.go:217] Completed: docker run --rm --name newest-cni-800400-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-800400 --entrypoint /usr/bin/test -v newest-cni-800400:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -d /var/lib: (5.8255631s)
	I0719 05:21:16.056353    9828 oci.go:107] Successfully prepared a docker volume newest-cni-800400
	I0719 05:21:16.056353    9828 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0719 05:21:16.056353    9828 kic.go:194] Starting extracting preloaded images to volume ...
	I0719 05:21:16.066363    9828 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-800400:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -I lz4 -xf /preloaded.tar -C /extractDir
	I0719 05:21:20.670252    6228 system_pods.go:59] 8 kube-system pods found
	I0719 05:21:20.670252    6228 system_pods.go:61] "coredns-5cfdc65f69-7mss6" [6df9e16e-7b52-453c-858b-c038b113b117] Running
	I0719 05:21:20.670252    6228 system_pods.go:61] "etcd-no-preload-857600" [d832c91f-90ec-40b4-b906-fffb17e83cb2] Running
	I0719 05:21:20.670252    6228 system_pods.go:61] "kube-apiserver-no-preload-857600" [658c752c-c5bf-4f7a-abda-9431c9e53b54] Running
	I0719 05:21:20.670252    6228 system_pods.go:61] "kube-controller-manager-no-preload-857600" [e56c121f-30d1-4914-85a9-e3602c3a6970] Running
	I0719 05:21:20.670252    6228 system_pods.go:61] "kube-proxy-58tff" [037810f3-87f3-40ac-9556-c67ce329afaf] Running
	I0719 05:21:20.670252    6228 system_pods.go:61] "kube-scheduler-no-preload-857600" [3f45ce49-9026-4922-a079-faf162506a9d] Running
	I0719 05:21:20.670252    6228 system_pods.go:61] "metrics-server-78fcd8795b-p4shw" [65757233-14b9-4ca4-a918-803b5e39bfaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 05:21:20.670252    6228 system_pods.go:61] "storage-provisioner" [d9ce36df-a5fa-40a3-881e-abcbe4b5722a] Running
	I0719 05:21:20.670252    6228 system_pods.go:74] duration metric: took 5.5079711s to wait for pod list to return data ...
	I0719 05:21:20.670252    6228 default_sa.go:34] waiting for default service account to be created ...
	I0719 05:21:20.678244    6228 default_sa.go:45] found service account: "default"
	I0719 05:21:20.678244    6228 default_sa.go:55] duration metric: took 7.9924ms for default service account to be created ...
	I0719 05:21:20.678244    6228 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 05:21:20.692239    6228 system_pods.go:86] 8 kube-system pods found
	I0719 05:21:20.692239    6228 system_pods.go:89] "coredns-5cfdc65f69-7mss6" [6df9e16e-7b52-453c-858b-c038b113b117] Running
	I0719 05:21:20.692239    6228 system_pods.go:89] "etcd-no-preload-857600" [d832c91f-90ec-40b4-b906-fffb17e83cb2] Running
	I0719 05:21:20.692239    6228 system_pods.go:89] "kube-apiserver-no-preload-857600" [658c752c-c5bf-4f7a-abda-9431c9e53b54] Running
	I0719 05:21:20.692239    6228 system_pods.go:89] "kube-controller-manager-no-preload-857600" [e56c121f-30d1-4914-85a9-e3602c3a6970] Running
	I0719 05:21:20.692239    6228 system_pods.go:89] "kube-proxy-58tff" [037810f3-87f3-40ac-9556-c67ce329afaf] Running
	I0719 05:21:20.692239    6228 system_pods.go:89] "kube-scheduler-no-preload-857600" [3f45ce49-9026-4922-a079-faf162506a9d] Running
	I0719 05:21:20.692239    6228 system_pods.go:89] "metrics-server-78fcd8795b-p4shw" [65757233-14b9-4ca4-a918-803b5e39bfaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 05:21:20.692239    6228 system_pods.go:89] "storage-provisioner" [d9ce36df-a5fa-40a3-881e-abcbe4b5722a] Running
	I0719 05:21:20.692239    6228 system_pods.go:126] duration metric: took 13.9947ms to wait for k8s-apps to be running ...
	I0719 05:21:20.692239    6228 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 05:21:20.713286    6228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 05:21:20.741242    6228 system_svc.go:56] duration metric: took 49.0025ms WaitForService to wait for kubelet
	I0719 05:21:20.741242    6228 kubeadm.go:582] duration metric: took 4m33.6915453s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 05:21:20.741242    6228 node_conditions.go:102] verifying NodePressure condition ...
	I0719 05:21:20.751242    6228 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I0719 05:21:20.751242    6228 node_conditions.go:123] node cpu capacity is 16
	I0719 05:21:20.751242    6228 node_conditions.go:105] duration metric: took 10.0003ms to run NodePressure ...
	I0719 05:21:20.751242    6228 start.go:241] waiting for startup goroutines ...
	I0719 05:21:20.751242    6228 start.go:246] waiting for cluster config update ...
	I0719 05:21:20.751242    6228 start.go:255] writing updated cluster config ...
	I0719 05:21:20.766247    6228 ssh_runner.go:195] Run: rm -f paused
	I0719 05:21:20.938256    6228 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0719 05:21:20.944252    6228 out.go:177] * Done! kubectl is now configured to use "no-preload-857600" cluster and "default" namespace by default
	I0719 05:21:18.361981    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:20.367422    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:22.870733    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:24.889390    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:27.372962    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:29.983480    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:32.368701    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:34.538265    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:39.400695    9828 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-800400:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -I lz4 -xf /preloaded.tar -C /extractDir: (23.3341495s)
	I0719 05:21:39.401737    9828 kic.go:203] duration metric: took 23.3452024s to extract preloaded images to volume ...
	I0719 05:21:39.418974    9828 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0719 05:21:39.842371    9828 info.go:266] docker info: {ID:924ecda6-fdfd-44a1-a6d3-1c1814628cc9 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:98 OomKillDisable:true NGoroutines:97 SystemTime:2024-07-19 05:21:39.782057532 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0719 05:21:39.858342    9828 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0719 05:21:40.243419    9828 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-800400 --name newest-cni-800400 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-800400 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-800400 --network newest-cni-800400 --ip 192.168.85.2 --volume newest-cni-800400:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f
	I0719 05:21:36.874845    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:39.555944    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	
	
	==> Docker <==
	Jul 19 05:18:08 no-preload-857600 dockerd[1101]: time="2024-07-19T05:18:08.326892619Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	Jul 19 05:18:08 no-preload-857600 dockerd[1101]: time="2024-07-19T05:18:08.326986531Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	Jul 19 05:18:08 no-preload-857600 dockerd[1101]: time="2024-07-19T05:18:08.343459563Z" level=error msg="Handler for POST /v1.43/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	Jul 19 05:18:21 no-preload-857600 dockerd[1101]: time="2024-07-19T05:18:21.589421709Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Jul 19 05:18:21 no-preload-857600 dockerd[1101]: time="2024-07-19T05:18:21.842545843Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Jul 19 05:18:21 no-preload-857600 dockerd[1101]: time="2024-07-19T05:18:21.842781474Z" level=warning msg="[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Jul 19 05:18:21 no-preload-857600 dockerd[1101]: time="2024-07-19T05:18:21.842823980Z" level=info msg="Attempting next endpoint for pull after error: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Jul 19 05:18:21 no-preload-857600 cri-dockerd[1387]: time="2024-07-19T05:18:21Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Jul 19 05:19:00 no-preload-857600 dockerd[1101]: time="2024-07-19T05:19:00.304343915Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	Jul 19 05:19:00 no-preload-857600 dockerd[1101]: time="2024-07-19T05:19:00.304508936Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	Jul 19 05:19:00 no-preload-857600 dockerd[1101]: time="2024-07-19T05:19:00.361402687Z" level=error msg="Handler for POST /v1.43/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	Jul 19 05:19:15 no-preload-857600 dockerd[1101]: time="2024-07-19T05:19:15.551518124Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Jul 19 05:19:15 no-preload-857600 dockerd[1101]: time="2024-07-19T05:19:15.797091651Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Jul 19 05:19:15 no-preload-857600 dockerd[1101]: time="2024-07-19T05:19:15.797425793Z" level=warning msg="[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Jul 19 05:19:15 no-preload-857600 dockerd[1101]: time="2024-07-19T05:19:15.797615617Z" level=info msg="Attempting next endpoint for pull after error: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Jul 19 05:19:15 no-preload-857600 cri-dockerd[1387]: time="2024-07-19T05:19:15Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Jul 19 05:20:35 no-preload-857600 dockerd[1101]: time="2024-07-19T05:20:35.296254322Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	Jul 19 05:20:35 no-preload-857600 dockerd[1101]: time="2024-07-19T05:20:35.296531058Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	Jul 19 05:20:35 no-preload-857600 dockerd[1101]: time="2024-07-19T05:20:35.313326811Z" level=error msg="Handler for POST /v1.43/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	Jul 19 05:20:41 no-preload-857600 dockerd[1101]: time="2024-07-19T05:20:41.525733798Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Jul 19 05:20:41 no-preload-857600 dockerd[1101]: time="2024-07-19T05:20:41.785965538Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Jul 19 05:20:41 no-preload-857600 dockerd[1101]: time="2024-07-19T05:20:41.786629523Z" level=warning msg="[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Jul 19 05:20:41 no-preload-857600 dockerd[1101]: time="2024-07-19T05:20:41.787078080Z" level=info msg="Attempting next endpoint for pull after error: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Jul 19 05:20:41 no-preload-857600 cri-dockerd[1387]: time="2024-07-19T05:20:41Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Jul 19 05:21:38 no-preload-857600 dockerd[1101]: time="2024-07-19T05:21:38.392028036Z" level=error msg="Handler for POST /v1.46/containers/cccbfe71624e/pause returned error: cannot pause container cccbfe71624edef056c8c354cf0407e2da608a77084113a15e615dbcd101eab2: OCI runtime pause failed: unable to freeze: unknown"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f574e991ac3be       kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93        4 minutes ago       Running             kubernetes-dashboard      0                   86d5658d79966       kubernetes-dashboard-5cc9f66cf4-bxrdv
	a1035fa3f02e5       56cc512116c8f                                                                                         4 minutes ago       Running             busybox                   1                   d902f51a55618       busybox
	93e4e45355fa7       cbb01a7bd410d                                                                                         4 minutes ago       Running             coredns                   1                   a83c2fcaf08a7       coredns-5cfdc65f69-7mss6
	0f5f9cd278b3b       6e38f40d628db                                                                                         4 minutes ago       Running             storage-provisioner       1                   c5e888205c095       storage-provisioner
	e442a865f3cd3       c6c6581369906                                                                                         4 minutes ago       Running             kube-proxy                1                   6e4428c651605       kube-proxy-58tff
	1996f8977b081       63cf9a9f4bf5d                                                                                         4 minutes ago       Running             kube-controller-manager   1                   f551e032953ec       kube-controller-manager-no-preload-857600
	0e6a091da1adc       f9a39d2c9991a                                                                                         4 minutes ago       Running             kube-apiserver            1                   c041bebd04954       kube-apiserver-no-preload-857600
	cccbfe71624ed       cfec37af81d91                                                                                         4 minutes ago       Running             etcd                      1                   605313d29ef17       etcd-no-preload-857600
	5ab315262a227       d2edabc17c519                                                                                         4 minutes ago       Running             kube-scheduler            1                   8bcd2e14ae482       kube-scheduler-no-preload-857600
	6db3ce7491b6d       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   5 minutes ago       Exited              busybox                   0                   687f415cfa434       busybox
	5a57bcf51ab50       6e38f40d628db                                                                                         6 minutes ago       Exited              storage-provisioner       0                   1b6fc12aeb61f       storage-provisioner
	9ce07a413e207       cbb01a7bd410d                                                                                         6 minutes ago       Exited              coredns                   0                   b9a6680f718ab       coredns-5cfdc65f69-7mss6
	56deee6eea2de       c6c6581369906                                                                                         6 minutes ago       Exited              kube-proxy                0                   cac5f784fc987       kube-proxy-58tff
	1f63215b46970       63cf9a9f4bf5d                                                                                         6 minutes ago       Exited              kube-controller-manager   0                   8c05d42bb488c       kube-controller-manager-no-preload-857600
	85b22019ef172       d2edabc17c519                                                                                         6 minutes ago       Exited              kube-scheduler            0                   ecc094e07a4f8       kube-scheduler-no-preload-857600
	4566c20bc227e       cfec37af81d91                                                                                         6 minutes ago       Exited              etcd                      0                   3766100ed5629       etcd-no-preload-857600
	a1f089136dfd4       f9a39d2c9991a                                                                                         6 minutes ago       Exited              kube-apiserver            0                   b40324a506331       kube-apiserver-no-preload-857600
	
	
	==> coredns [93e4e45355fa] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = f869070685748660180df1b7a47d58cdafcf2f368266578c062d1151dc2c900964aecc5975e8882e6de6fdfb6460463e30ebfaad2ec8f0c3c6436f80225b3b5b
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:41389 - 47107 "HINFO IN 1402704032434561587.771532936133455709. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.075681197s
	
	
	==> coredns [9ce07a413e20] <==
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: Trace[1848295072]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Jul-2024 05:15:18.296) (total time: 21027ms):
	Trace[1848295072]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused 21026ms (05:15:39.323)
	Trace[1848295072]: [21.027234486s] [21.027234486s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: Trace[1453536071]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Jul-2024 05:15:18.296) (total time: 21028ms):
	Trace[1453536071]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused 21028ms (05:15:39.324)
	Trace[1453536071]: [21.028310725s] [21.028310725s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: Trace[910279935]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Jul-2024 05:15:18.296) (total time: 21028ms):
	Trace[910279935]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused 21028ms (05:15:39.325)
	Trace[910279935]: [21.028495349s] [21.028495349s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = f869070685748660180df1b7a47d58cdafcf2f368266578c062d1151dc2c900964aecc5975e8882e6de6fdfb6460463e30ebfaad2ec8f0c3c6436f80225b3b5b
	[INFO] Reloading complete
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	
	
	==> dmesg <==
	[Jul19 04:58] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Jul19 05:01] tmpfs: Unknown parameter 'noswap'
	[  +6.617016] tmpfs: Unknown parameter 'noswap'
	[Jul19 05:05] tmpfs: Unknown parameter 'noswap'
	[ +13.843159] tmpfs: Unknown parameter 'noswap'
	[Jul19 05:14] tmpfs: Unknown parameter 'noswap'
	[ +12.988281] tmpfs: Unknown parameter 'noswap'
	[Jul19 05:16] tmpfs: Unknown parameter 'noswap'
	
	
	==> etcd [4566c20bc227] <==
	{"level":"info","ts":"2024-07-19T05:16:05.181333Z","caller":"traceutil/trace.go:171","msg":"trace[890466085] transaction","detail":"{read_only:false; response_revision:499; number_of_response:1; }","duration":"133.411667ms","start":"2024-07-19T05:16:05.047892Z","end":"2024-07-19T05:16:05.181304Z","steps":["trace[890466085] 'process raft request'  (duration: 94.62906ms)","trace[890466085] 'compare'  (duration: 38.622086ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-19T05:16:05.190419Z","caller":"traceutil/trace.go:171","msg":"trace[1350450407] transaction","detail":"{read_only:false; response_revision:500; number_of_response:1; }","duration":"120.066009ms","start":"2024-07-19T05:16:05.07033Z","end":"2024-07-19T05:16:05.190396Z","steps":["trace[1350450407] 'process raft request'  (duration: 119.76757ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T05:16:05.190596Z","caller":"traceutil/trace.go:171","msg":"trace[574223381] transaction","detail":"{read_only:false; response_revision:503; number_of_response:1; }","duration":"115.380992ms","start":"2024-07-19T05:16:05.075205Z","end":"2024-07-19T05:16:05.190586Z","steps":["trace[574223381] 'process raft request'  (duration: 115.145161ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T05:16:05.190544Z","caller":"traceutil/trace.go:171","msg":"trace[272282916] transaction","detail":"{read_only:false; response_revision:502; number_of_response:1; }","duration":"116.105787ms","start":"2024-07-19T05:16:05.074422Z","end":"2024-07-19T05:16:05.190528Z","steps":["trace[272282916] 'process raft request'  (duration: 115.859255ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T05:16:05.190792Z","caller":"traceutil/trace.go:171","msg":"trace[1050857287] transaction","detail":"{read_only:false; response_revision:501; number_of_response:1; }","duration":"118.402491ms","start":"2024-07-19T05:16:05.072369Z","end":"2024-07-19T05:16:05.190772Z","steps":["trace[1050857287] 'process raft request'  (duration: 117.855519ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T05:16:05.341807Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.856383ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/metrics-server\" ","response":"range_response_count:1 size:5144"}
	{"level":"info","ts":"2024-07-19T05:16:05.342035Z","caller":"traceutil/trace.go:171","msg":"trace[1764673753] range","detail":"{range_begin:/registry/deployments/kube-system/metrics-server; range_end:; response_count:1; response_revision:509; }","duration":"139.095615ms","start":"2024-07-19T05:16:05.202917Z","end":"2024-07-19T05:16:05.342013Z","steps":["trace[1764673753] 'agreement among raft nodes before linearized reading'  (duration: 138.778373ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T05:16:05.342288Z","caller":"traceutil/trace.go:171","msg":"trace[1324801329] transaction","detail":"{read_only:false; response_revision:508; number_of_response:1; }","duration":"139.29124ms","start":"2024-07-19T05:16:05.202954Z","end":"2024-07-19T05:16:05.342245Z","steps":["trace[1324801329] 'process raft request'  (duration: 138.660057ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T05:16:05.342427Z","caller":"traceutil/trace.go:171","msg":"trace[1603121790] transaction","detail":"{read_only:false; response_revision:509; number_of_response:1; }","duration":"139.390054ms","start":"2024-07-19T05:16:05.203013Z","end":"2024-07-19T05:16:05.342403Z","steps":["trace[1603121790] 'process raft request'  (duration: 138.639555ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T05:16:05.342496Z","caller":"traceutil/trace.go:171","msg":"trace[1578776893] transaction","detail":"{read_only:false; response_revision:506; number_of_response:1; }","duration":"146.699016ms","start":"2024-07-19T05:16:05.195781Z","end":"2024-07-19T05:16:05.34248Z","steps":["trace[1578776893] 'process raft request'  (duration: 98.702396ms)","trace[1578776893] 'compare'  (duration: 46.845069ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-19T05:16:05.342335Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.549148ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-78fcd8795b-p4shw\" ","response":"range_response_count:1 size:2947"}
	{"level":"info","ts":"2024-07-19T05:16:05.342613Z","caller":"traceutil/trace.go:171","msg":"trace[253359815] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-78fcd8795b-p4shw; range_end:; response_count:1; response_revision:509; }","duration":"135.813083ms","start":"2024-07-19T05:16:05.206773Z","end":"2024-07-19T05:16:05.342586Z","steps":["trace[253359815] 'agreement among raft nodes before linearized reading'  (duration: 135.516644ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T05:16:05.342641Z","caller":"traceutil/trace.go:171","msg":"trace[1653564789] transaction","detail":"{read_only:false; response_revision:507; number_of_response:1; }","duration":"144.000061ms","start":"2024-07-19T05:16:05.19862Z","end":"2024-07-19T05:16:05.34262Z","steps":["trace[1653564789] 'process raft request'  (duration: 142.937321ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T05:16:05.710152Z","caller":"traceutil/trace.go:171","msg":"trace[370121252] transaction","detail":"{read_only:false; response_revision:519; number_of_response:1; }","duration":"116.819082ms","start":"2024-07-19T05:16:05.5933Z","end":"2024-07-19T05:16:05.710119Z","steps":["trace[370121252] 'process raft request'  (duration: 52.225777ms)","trace[370121252] 'compare'  (duration: 64.315969ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-19T05:16:06.789667Z","caller":"traceutil/trace.go:171","msg":"trace[2103382825] transaction","detail":"{read_only:false; response_revision:524; number_of_response:1; }","duration":"108.581398ms","start":"2024-07-19T05:16:06.681058Z","end":"2024-07-19T05:16:06.78964Z","steps":["trace[2103382825] 'process raft request'  (duration: 108.377571ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T05:16:07.667203Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-19T05:16:07.667658Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"no-preload-857600","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"]}
	{"level":"warn","ts":"2024-07-19T05:16:07.667827Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-19T05:16:07.667987Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-19T05:16:07.86732Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.94.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-19T05:16:07.867391Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.94.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-19T05:16:07.867489Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"dfc97eb0aae75b33","current-leader-member-id":"dfc97eb0aae75b33"}
	{"level":"info","ts":"2024-07-19T05:16:07.967152Z","caller":"embed/etcd.go:580","msg":"stopping serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2024-07-19T05:16:07.967334Z","caller":"embed/etcd.go:585","msg":"stopped serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2024-07-19T05:16:07.967441Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"no-preload-857600","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"]}
	
	
	==> etcd [cccbfe71624e] <==
	{"level":"info","ts":"2024-07-19T05:21:32.832662Z","caller":"traceutil/trace.go:171","msg":"trace[1331832826] linearizableReadLoop","detail":"{readStateIndex:1035; appliedIndex:1033; }","duration":"209.692439ms","start":"2024-07-19T05:21:32.622953Z","end":"2024-07-19T05:21:32.832645Z","steps":["trace[1331832826] 'read index received'  (duration: 2.309187ms)","trace[1331832826] 'applied index is now lower than readState.Index'  (duration: 207.382352ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-19T05:21:32.832808Z","caller":"traceutil/trace.go:171","msg":"trace[321442553] transaction","detail":"{read_only:false; response_revision:946; number_of_response:1; }","duration":"448.141849ms","start":"2024-07-19T05:21:32.384644Z","end":"2024-07-19T05:21:32.832786Z","steps":["trace[321442553] 'process raft request'  (duration: 447.840612ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T05:21:32.832856Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"209.892464ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-19T05:21:32.832898Z","caller":"traceutil/trace.go:171","msg":"trace[1071415609] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:946; }","duration":"209.936669ms","start":"2024-07-19T05:21:32.622947Z","end":"2024-07-19T05:21:32.832884Z","steps":["trace[1071415609] 'agreement among raft nodes before linearized reading'  (duration: 209.867161ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T05:21:32.83306Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T05:21:32.384615Z","time spent":"448.250462ms","remote":"127.0.0.1:53118","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":557,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/no-preload-857600\" mod_revision:938 > success:<request_put:<key:\"/registry/leases/kube-node-lease/no-preload-857600\" value_size:499 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/no-preload-857600\" > >"}
	{"level":"warn","ts":"2024-07-19T05:21:33.123408Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"168.281897ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets/\" range_end:\"/registry/secrets0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-07-19T05:21:33.123597Z","caller":"traceutil/trace.go:171","msg":"trace[467006304] range","detail":"{range_begin:/registry/secrets/; range_end:/registry/secrets0; response_count:0; response_revision:946; }","duration":"168.501024ms","start":"2024-07-19T05:21:32.955076Z","end":"2024-07-19T05:21:33.123577Z","steps":["trace[467006304] 'count revisions from in-memory index tree'  (duration: 168.181384ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T05:21:33.417097Z","caller":"traceutil/trace.go:171","msg":"trace[1652786851] transaction","detail":"{read_only:false; response_revision:947; number_of_response:1; }","duration":"158.448276ms","start":"2024-07-19T05:21:33.258613Z","end":"2024-07-19T05:21:33.417062Z","steps":["trace[1652786851] 'process raft request'  (duration: 157.985918ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T05:21:35.720728Z","caller":"traceutil/trace.go:171","msg":"trace[528320762] linearizableReadLoop","detail":"{readStateIndex:1037; appliedIndex:1036; }","duration":"194.530156ms","start":"2024-07-19T05:21:35.526163Z","end":"2024-07-19T05:21:35.720693Z","steps":["trace[528320762] 'read index received'  (duration: 194.202115ms)","trace[528320762] 'applied index is now lower than readState.Index'  (duration: 327.141µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-19T05:21:35.720992Z","caller":"traceutil/trace.go:171","msg":"trace[1728522886] transaction","detail":"{read_only:false; response_revision:948; number_of_response:1; }","duration":"292.56443ms","start":"2024-07-19T05:21:35.428414Z","end":"2024-07-19T05:21:35.720978Z","steps":["trace[1728522886] 'process raft request'  (duration: 292.066168ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T05:21:35.721215Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"195.024618ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-07-19T05:21:35.721289Z","caller":"traceutil/trace.go:171","msg":"trace[737382215] range","detail":"{range_begin:/registry/prioritylevelconfigurations/; range_end:/registry/prioritylevelconfigurations0; response_count:0; response_revision:948; }","duration":"195.116029ms","start":"2024-07-19T05:21:35.526156Z","end":"2024-07-19T05:21:35.721272Z","steps":["trace[737382215] 'agreement among raft nodes before linearized reading'  (duration: 194.992714ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T05:21:37.768213Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"332.172048ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571755476038499539 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.94.2\" mod_revision:942 > success:<request_put:<key:\"/registry/masterleases/192.168.94.2\" value_size:65 lease:6571755476038499537 >> failure:<request_range:<key:\"/registry/masterleases/192.168.94.2\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-19T05:21:37.768817Z","caller":"traceutil/trace.go:171","msg":"trace[1404558607] linearizableReadLoop","detail":"{readStateIndex:1039; appliedIndex:1038; }","duration":"149.098414ms","start":"2024-07-19T05:21:37.619701Z","end":"2024-07-19T05:21:37.768799Z","steps":["trace[1404558607] 'read index received'  (duration: 128.716µs)","trace[1404558607] 'applied index is now lower than readState.Index'  (duration: 148.967198ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-19T05:21:37.76913Z","caller":"traceutil/trace.go:171","msg":"trace[252856347] transaction","detail":"{read_only:false; response_revision:949; number_of_response:1; }","duration":"388.319921ms","start":"2024-07-19T05:21:37.380642Z","end":"2024-07-19T05:21:37.768962Z","steps":["trace[252856347] 'process raft request'  (duration: 55.205055ms)","trace[252856347] 'compare'  (duration: 331.996827ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-19T05:21:37.769454Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T05:21:37.380624Z","time spent":"388.542648ms","remote":"127.0.0.1:52870","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":116,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.94.2\" mod_revision:942 > success:<request_put:<key:\"/registry/masterleases/192.168.94.2\" value_size:65 lease:6571755476038499537 >> failure:<request_range:<key:\"/registry/masterleases/192.168.94.2\" > >"}
	{"level":"warn","ts":"2024-07-19T05:21:37.76961Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"149.897114ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-19T05:21:37.769776Z","caller":"traceutil/trace.go:171","msg":"trace[1853331938] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:949; }","duration":"150.065534ms","start":"2024-07-19T05:21:37.619694Z","end":"2024-07-19T05:21:37.769759Z","steps":["trace[1853331938] 'agreement among raft nodes before linearized reading'  (duration: 149.744895ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T05:21:39.074756Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.187724189s","expected-duration":"100ms","prefix":"","request":"header:<ID:6571755476038499545 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:948 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-19T05:21:39.074941Z","caller":"traceutil/trace.go:171","msg":"trace[238508671] linearizableReadLoop","detail":"{readStateIndex:1040; appliedIndex:1039; }","duration":"1.246493546s","start":"2024-07-19T05:21:37.828434Z","end":"2024-07-19T05:21:39.074927Z","steps":["trace[238508671] 'read index received'  (duration: 58.470023ms)","trace[238508671] 'applied index is now lower than readState.Index'  (duration: 1.188021523s)"],"step_count":2}
	{"level":"warn","ts":"2024-07-19T05:21:39.075129Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.246741876s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/default/kubernetes\" ","response":"range_response_count:1 size:474"}
	{"level":"info","ts":"2024-07-19T05:21:39.075233Z","caller":"traceutil/trace.go:171","msg":"trace[1039528414] range","detail":"{range_begin:/registry/endpointslices/default/kubernetes; range_end:; response_count:1; response_revision:950; }","duration":"1.246844788s","start":"2024-07-19T05:21:37.828375Z","end":"2024-07-19T05:21:39.07522Z","steps":["trace[1039528414] 'agreement among raft nodes before linearized reading'  (duration: 1.246645164s)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T05:21:39.075232Z","caller":"traceutil/trace.go:171","msg":"trace[726508930] transaction","detail":"{read_only:false; response_revision:950; number_of_response:1; }","duration":"1.246632863s","start":"2024-07-19T05:21:37.828407Z","end":"2024-07-19T05:21:39.075039Z","steps":["trace[726508930] 'process raft request'  (duration: 58.444519ms)","trace[726508930] 'compare'  (duration: 1.186935697s)"],"step_count":2}
	{"level":"warn","ts":"2024-07-19T05:21:39.075358Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T05:21:37.828341Z","time spent":"1.247001906s","remote":"127.0.0.1:53134","response type":"/etcdserverpb.KV/Range","request count":0,"request size":45,"response count":1,"response size":498,"request content":"key:\"/registry/endpointslices/default/kubernetes\" "}
	{"level":"warn","ts":"2024-07-19T05:21:39.075502Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T05:21:37.828388Z","time spent":"1.246962201s","remote":"127.0.0.1:53000","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:948 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	
	
	==> kernel <==
	 05:21:54 up 2 days,  2:57,  0 users,  load average: 6.41, 7.54, 7.98
	Linux no-preload-857600 5.15.146.1-microsoft-standard-WSL2 #1 SMP Thu Jan 11 04:09:03 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kube-apiserver [0e6a091da1ad] <==
	I0719 05:17:10.999930       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0719 05:17:11.512334       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.161.253"}
	I0719 05:17:13.170051       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.23.162"}
	I0719 05:17:14.268168       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0719 05:17:14.280847       1 controller.go:615] quota admission added evaluator for: endpoints
	I0719 05:17:14.661709       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	W0719 05:18:04.984327       1 handler_proxy.go:99] no RequestInfo found in the context
	E0719 05:18:04.984513       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0719 05:18:04.984646       1 handler_proxy.go:99] no RequestInfo found in the context
	E0719 05:18:04.984680       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0719 05:18:04.985951       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0719 05:18:04.986016       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0719 05:20:04.974046       1 handler_proxy.go:99] no RequestInfo found in the context
	E0719 05:20:04.974322       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0719 05:20:04.974470       1 handler_proxy.go:99] no RequestInfo found in the context
	E0719 05:20:04.974514       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0719 05:20:04.975800       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0719 05:20:04.975980       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0719 05:21:37.776591       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="57.843383ms" method="GET" path="/readyz" result=null
	
	
	==> kube-apiserver [a1f089136dfd] <==
	W0719 05:16:17.035504       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:16:17.086994       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:16:17.114586       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:16:17.121571       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:16:17.139449       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:16:17.147140       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:16:17.154377       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:16:17.210236       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:16:17.238773       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:16:17.245391       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:16:17.265603       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:16:17.302596       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:16:17.303352       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:16:17.307117       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:16:17.316041       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:16:17.527271       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:16:17.530079       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:16:17.569579       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:16:17.598686       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:16:17.641568       1 logging.go:55] [core] [Channel #190 SubChannel #191]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:16:17.652833       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:16:17.664848       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:16:17.678191       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:16:17.700298       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:16:17.728043       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [1996f8977b08] <==
	I0719 05:18:21.303825       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-64dbdb65b8" duration="241.732µs"
	I0719 05:18:22.288232       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="172.823µs"
	I0719 05:18:34.281553       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="70.209µs"
	I0719 05:18:34.320621       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-64dbdb65b8" duration="68.009µs"
	E0719 05:18:44.514103       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0719 05:18:44.624128       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0719 05:18:48.282508       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-64dbdb65b8" duration="203.927µs"
	I0719 05:19:11.286782       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="179.016µs"
	E0719 05:19:14.520388       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0719 05:19:14.635225       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0719 05:19:24.289218       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="135.518µs"
	I0719 05:19:27.271810       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-64dbdb65b8" duration="57.007µs"
	I0719 05:19:40.266953       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-64dbdb65b8" duration="80.208µs"
	E0719 05:19:44.527571       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0719 05:19:44.652317       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 05:20:14.534346       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0719 05:20:14.664310       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 05:20:44.542211       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0719 05:20:44.677312       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0719 05:20:46.260355       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="90.812µs"
	I0719 05:20:55.266702       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-64dbdb65b8" duration="85.111µs"
	I0719 05:21:01.265269       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="96.213µs"
	I0719 05:21:10.308876       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-64dbdb65b8" duration="155.918µs"
	E0719 05:21:14.550800       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0719 05:21:14.689423       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-controller-manager [1f63215b4697] <==
	I0719 05:15:13.499682       1 shared_informer.go:320] Caches are synced for namespace
	I0719 05:15:13.503015       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="202.953881ms"
	I0719 05:15:13.505302       1 shared_informer.go:320] Caches are synced for garbage collector
	I0719 05:15:13.576884       1 shared_informer.go:320] Caches are synced for service account
	I0719 05:15:13.576935       1 shared_informer.go:320] Caches are synced for garbage collector
	I0719 05:15:13.577095       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0719 05:15:13.600207       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="97.01081ms"
	I0719 05:15:13.600458       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="72.61µs"
	I0719 05:15:13.606079       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="290.138µs"
	I0719 05:15:13.773637       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="86.812µs"
	I0719 05:15:15.496673       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="203.991816ms"
	I0719 05:15:15.676180       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="179.436724ms"
	I0719 05:15:15.676562       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="104.914µs"
	I0719 05:15:18.693402       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="102.513µs"
	I0719 05:15:18.794680       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="120.616µs"
	I0719 05:15:19.205852       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-857600"
	I0719 05:15:29.011160       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="108.714µs"
	I0719 05:15:29.438556       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="145.619µs"
	I0719 05:15:29.480991       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="52.707µs"
	I0719 05:15:46.557798       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="456.519046ms"
	I0719 05:15:46.558145       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="180.523µs"
	I0719 05:16:05.196217       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="467.801596ms"
	I0719 05:16:05.346723       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="149.936843ms"
	I0719 05:16:05.346944       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="69.009µs"
	I0719 05:16:05.419572       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="240.332µs"
	
	
	==> kube-proxy [56deee6eea2d] <==
	E0719 05:15:17.610948       1 metrics.go:338] "failed to initialize nfacct client" err="nfacct sub-system not available"
	E0719 05:15:17.632640       1 metrics.go:338] "failed to initialize nfacct client" err="nfacct sub-system not available"
	I0719 05:15:17.686905       1 server_linux.go:67] "Using iptables proxy"
	I0719 05:15:18.092043       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.94.2"]
	E0719 05:15:18.092374       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0719 05:15:18.285085       1 server.go:244] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0719 05:15:18.285284       1 server_linux.go:170] "Using iptables Proxier"
	I0719 05:15:18.292114       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	E0719 05:15:18.313165       1 proxier.go:283] "Failed to create nfacct runner, nfacct based metrics won't be available" err="nfacct sub-system not available" ipFamily="IPv4"
	E0719 05:15:18.331624       1 proxier.go:283] "Failed to create nfacct runner, nfacct based metrics won't be available" err="nfacct sub-system not available" ipFamily="IPv6"
	I0719 05:15:18.331826       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0719 05:15:18.331854       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 05:15:18.371461       1 config.go:197] "Starting service config controller"
	I0719 05:15:18.371753       1 config.go:326] "Starting node config controller"
	I0719 05:15:18.371777       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 05:15:18.371794       1 config.go:104] "Starting endpoint slice config controller"
	I0719 05:15:18.371881       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 05:15:18.372141       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 05:15:18.473312       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0719 05:15:18.473355       1 shared_informer.go:320] Caches are synced for service config
	I0719 05:15:18.473618       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [e442a865f3cd] <==
	E0719 05:17:16.940649       1 metrics.go:338] "failed to initialize nfacct client" err="nfacct sub-system not available"
	E0719 05:17:16.959301       1 metrics.go:338] "failed to initialize nfacct client" err="nfacct sub-system not available"
	I0719 05:17:17.002866       1 server_linux.go:67] "Using iptables proxy"
	I0719 05:17:17.745679       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.94.2"]
	E0719 05:17:17.745891       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0719 05:17:17.805008       1 server.go:244] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0719 05:17:17.805162       1 server_linux.go:170] "Using iptables Proxier"
	I0719 05:17:17.810504       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	E0719 05:17:17.827741       1 proxier.go:283] "Failed to create nfacct runner, nfacct based metrics won't be available" err="nfacct sub-system not available" ipFamily="IPv4"
	E0719 05:17:17.844983       1 proxier.go:283] "Failed to create nfacct runner, nfacct based metrics won't be available" err="nfacct sub-system not available" ipFamily="IPv6"
	I0719 05:17:17.845525       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0719 05:17:17.845630       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 05:17:17.847492       1 config.go:197] "Starting service config controller"
	I0719 05:17:17.847614       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 05:17:17.847622       1 config.go:104] "Starting endpoint slice config controller"
	I0719 05:17:17.847636       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 05:17:17.851415       1 config.go:326] "Starting node config controller"
	I0719 05:17:17.851677       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 05:17:17.948644       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0719 05:17:17.948800       1 shared_informer.go:320] Caches are synced for service config
	I0719 05:17:17.951911       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5ab315262a22] <==
	I0719 05:16:57.982989       1 serving.go:386] Generated self-signed cert in-memory
	W0719 05:17:03.967698       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0719 05:17:03.972156       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found]
	W0719 05:17:03.972187       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0719 05:17:03.972202       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0719 05:17:04.167050       1 server.go:164] "Starting Kubernetes Scheduler" version="v1.31.0-beta.0"
	I0719 05:17:04.167116       1 server.go:166] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 05:17:04.174126       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0719 05:17:04.174372       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0719 05:17:04.174413       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0719 05:17:04.177200       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0719 05:17:04.277921       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [85b22019ef17] <==
	E0719 05:15:04.619988       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0719 05:15:04.678210       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0719 05:15:04.678362       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0719 05:15:04.784575       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0719 05:15:04.784830       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0719 05:15:04.829081       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0719 05:15:04.829232       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0719 05:15:04.930329       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0719 05:15:04.930531       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0719 05:15:04.947008       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0719 05:15:04.947150       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0719 05:15:04.948110       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0719 05:15:04.948223       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0719 05:15:04.999756       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0719 05:15:04.999871       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0719 05:15:05.130604       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0719 05:15:05.130692       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0719 05:15:05.239957       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0719 05:15:05.240187       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0719 05:15:05.269005       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0719 05:15:05.269127       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0719 05:15:05.292680       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0719 05:15:05.292857       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0719 05:15:07.583846       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0719 05:16:07.785045       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 19 05:20:05 no-preload-857600 kubelet[1594]: E0719 05:20:05.250704    1594 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-p4shw" podUID="65757233-14b9-4ca4-a918-803b5e39bfaf"
	Jul 19 05:20:07 no-preload-857600 kubelet[1594]: E0719 05:20:07.246715    1594 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-64dbdb65b8-l8477" podUID="77f02ca1-5339-4432-9842-e7f39377bfa5"
	Jul 19 05:20:18 no-preload-857600 kubelet[1594]: E0719 05:20:18.241764    1594 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-64dbdb65b8-l8477" podUID="77f02ca1-5339-4432-9842-e7f39377bfa5"
	Jul 19 05:20:20 no-preload-857600 kubelet[1594]: E0719 05:20:20.243390    1594 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-p4shw" podUID="65757233-14b9-4ca4-a918-803b5e39bfaf"
	Jul 19 05:20:30 no-preload-857600 kubelet[1594]: E0719 05:20:30.245329    1594 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-64dbdb65b8-l8477" podUID="77f02ca1-5339-4432-9842-e7f39377bfa5"
	Jul 19 05:20:35 no-preload-857600 kubelet[1594]: E0719 05:20:35.314997    1594 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 19 05:20:35 no-preload-857600 kubelet[1594]: E0719 05:20:35.315201    1594 kuberuntime_image.go:55] "Failed to pull image" err="Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 19 05:20:35 no-preload-857600 kubelet[1594]: E0719 05:20:35.315531    1594 kuberuntime_manager.go:1257] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4sb2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:
nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdi
n:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-78fcd8795b-p4shw_kube-system(65757233-14b9-4ca4-a918-803b5e39bfaf): ErrImagePull: Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host" logger="UnhandledError"
	Jul 19 05:20:35 no-preload-857600 kubelet[1594]: E0719 05:20:35.317013    1594 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host\"" pod="kube-system/metrics-server-78fcd8795b-p4shw" podUID="65757233-14b9-4ca4-a918-803b5e39bfaf"
	Jul 19 05:20:41 no-preload-857600 kubelet[1594]: E0719 05:20:41.797179    1594 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" image="registry.k8s.io/echoserver:1.4"
	Jul 19 05:20:41 no-preload-857600 kubelet[1594]: E0719 05:20:41.797343    1594 kuberuntime_image.go:55] "Failed to pull image" err="[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" image="registry.k8s.io/echoserver:1.4"
	Jul 19 05:20:41 no-preload-857600 kubelet[1594]: E0719 05:20:41.797675    1594 kuberuntime_manager.go:1257] "Unhandled Error" err="container &Container{Name:dashboard-metrics-scraper,Image:registry.k8s.io/echoserver:1.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:8000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-volume,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kwjxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 8000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:30,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:*2001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dashboard-metrics-scraper-64dbdb65b8-l8477_kubernetes-dashboard(77f02ca1-5339-4432-9842-e7f39377bfa5): ErrImagePull: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" logger="UnhandledError"
	Jul 19 05:20:41 no-preload-857600 kubelet[1594]: E0719 05:20:41.800673    1594 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-64dbdb65b8-l8477" podUID="77f02ca1-5339-4432-9842-e7f39377bfa5"
	Jul 19 05:20:46 no-preload-857600 kubelet[1594]: E0719 05:20:46.239407    1594 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-p4shw" podUID="65757233-14b9-4ca4-a918-803b5e39bfaf"
	Jul 19 05:20:55 no-preload-857600 kubelet[1594]: E0719 05:20:55.240625    1594 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-64dbdb65b8-l8477" podUID="77f02ca1-5339-4432-9842-e7f39377bfa5"
	Jul 19 05:21:01 no-preload-857600 kubelet[1594]: E0719 05:21:01.241058    1594 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-p4shw" podUID="65757233-14b9-4ca4-a918-803b5e39bfaf"
	Jul 19 05:21:10 no-preload-857600 kubelet[1594]: E0719 05:21:10.243913    1594 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-64dbdb65b8-l8477" podUID="77f02ca1-5339-4432-9842-e7f39377bfa5"
	Jul 19 05:21:12 no-preload-857600 kubelet[1594]: E0719 05:21:12.237412    1594 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-p4shw" podUID="65757233-14b9-4ca4-a918-803b5e39bfaf"
	Jul 19 05:21:25 no-preload-857600 kubelet[1594]: E0719 05:21:25.238613    1594 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-64dbdb65b8-l8477" podUID="77f02ca1-5339-4432-9842-e7f39377bfa5"
	Jul 19 05:21:27 no-preload-857600 kubelet[1594]: E0719 05:21:27.239551    1594 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-p4shw" podUID="65757233-14b9-4ca4-a918-803b5e39bfaf"
	Jul 19 05:21:37 no-preload-857600 kubelet[1594]: E0719 05:21:37.237059    1594 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-64dbdb65b8-l8477" podUID="77f02ca1-5339-4432-9842-e7f39377bfa5"
	Jul 19 05:21:37 no-preload-857600 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Jul 19 05:21:37 no-preload-857600 kubelet[1594]: I0719 05:21:37.669950    1594 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Jul 19 05:21:37 no-preload-857600 systemd[1]: kubelet.service: Deactivated successfully.
	Jul 19 05:21:37 no-preload-857600 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [f574e991ac3b] <==
	2024/07/19 05:17:43 Starting overwatch
	2024/07/19 05:17:43 Using namespace: kubernetes-dashboard
	2024/07/19 05:17:43 Using in-cluster config to connect to apiserver
	2024/07/19 05:17:43 Using secret token for csrf signing
	2024/07/19 05:17:43 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/07/19 05:17:43 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/07/19 05:17:43 Successful initial request to the apiserver, version: v1.31.0-beta.0
	2024/07/19 05:17:43 Generating JWE encryption key
	2024/07/19 05:17:43 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/07/19 05:17:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/07/19 05:17:44 Initializing JWE encryption key from synchronized object
	2024/07/19 05:17:44 Creating in-cluster Sidecar client
	2024/07/19 05:17:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/19 05:17:44 Serving insecurely on HTTP port: 9090
	2024/07/19 05:18:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/19 05:18:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/19 05:19:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/19 05:19:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/19 05:20:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/19 05:20:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/19 05:21:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [0f5f9cd278b3] <==
	I0719 05:17:19.466662       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0719 05:17:19.489735       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0719 05:17:19.490074       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0719 05:17:37.475148       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0719 05:17:37.475647       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8b64b39a-5974-422e-aa31-9a75b12ec12a", APIVersion:"v1", ResourceVersion:"692", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-857600_1d37b5c5-e8fc-44e9-95a6-51ecd66cfe98 became leader
	I0719 05:17:37.475793       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-857600_1d37b5c5-e8fc-44e9-95a6-51ecd66cfe98!
	I0719 05:17:37.577190       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-857600_1d37b5c5-e8fc-44e9-95a6-51ecd66cfe98!
	
	
	==> storage-provisioner [5a57bcf51ab5] <==
	I0719 05:15:19.604119       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0719 05:15:19.620538       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0719 05:15:19.620711       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0719 05:15:19.642686       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0719 05:15:19.643107       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-857600_e42b9fe7-ba2d-4bfb-a81f-f5eae859c4c2!
	I0719 05:15:19.643689       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8b64b39a-5974-422e-aa31-9a75b12ec12a", APIVersion:"v1", ResourceVersion:"432", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-857600_e42b9fe7-ba2d-4bfb-a81f-f5eae859c4c2 became leader
	I0719 05:15:19.744661       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-857600_e42b9fe7-ba2d-4bfb-a81f-f5eae859c4c2!
	

-- /stdout --
** stderr ** 
	W0719 05:21:41.718712    1548 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-857600 -n no-preload-857600
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-857600 -n no-preload-857600: exit status 2 (1.5374379s)

-- stdout --
	Paused

-- /stdout --
** stderr ** 
	W0719 05:21:55.521975    4668 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "no-preload-857600" apiserver is not running, skipping kubectl commands (state="Paused")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-857600
helpers_test.go:235: (dbg) docker inspect no-preload-857600:

-- stdout --
	[
	    {
	        "Id": "667b494ac545ee4e186d209dd1df140da250ec31165a3140be9ec5345e3b5683",
	        "Created": "2024-07-19T05:12:24.670058141Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 866365,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-07-19T05:16:27.471271649Z",
	            "FinishedAt": "2024-07-19T05:16:20.847309251Z"
	        },
	        "Image": "sha256:7bda27423b38cbebec7632cdf15a8fcb063ff209d17af249e6b3f1fbdb5fa681",
	        "ResolvConfPath": "/var/lib/docker/containers/667b494ac545ee4e186d209dd1df140da250ec31165a3140be9ec5345e3b5683/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/667b494ac545ee4e186d209dd1df140da250ec31165a3140be9ec5345e3b5683/hostname",
	        "HostsPath": "/var/lib/docker/containers/667b494ac545ee4e186d209dd1df140da250ec31165a3140be9ec5345e3b5683/hosts",
	        "LogPath": "/var/lib/docker/containers/667b494ac545ee4e186d209dd1df140da250ec31165a3140be9ec5345e3b5683/667b494ac545ee4e186d209dd1df140da250ec31165a3140be9ec5345e3b5683-json.log",
	        "Name": "/no-preload-857600",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-857600:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-857600",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/56815ed51d8bb48e26200a2d27b3106c308e11b0da970bb622bc311f4c1b5bf7-init/diff:/var/lib/docker/overlay2/8afef3549fbfde76a8b1d15736e3430a7f83f1f1968778d28daa6047c0f61b28/diff",
	                "MergedDir": "/var/lib/docker/overlay2/56815ed51d8bb48e26200a2d27b3106c308e11b0da970bb622bc311f4c1b5bf7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/56815ed51d8bb48e26200a2d27b3106c308e11b0da970bb622bc311f4c1b5bf7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/56815ed51d8bb48e26200a2d27b3106c308e11b0da970bb622bc311f4c1b5bf7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-857600",
	                "Source": "/var/lib/docker/volumes/no-preload-857600/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-857600",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-857600",
	                "name.minikube.sigs.k8s.io": "no-preload-857600",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "870d5e72a3217130fd39acb4444e0e08fa514c00a8d149132cafa87fa59230b2",
	            "SandboxKey": "/var/run/docker/netns/870d5e72a321",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52412"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52413"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52414"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52415"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52416"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-857600": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:5e:02",
	                    "NetworkID": "8d61708cdaacbe5de7de00fd2e6a54be51ebaffac2278baa57cc2bbbc32e39a2",
	                    "EndpointID": "39084deef717acbd6a302318c86aee86431e7a7f0612636c54500b8d4f7f78c5",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "no-preload-857600",
	                        "667b494ac545"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-857600 -n no-preload-857600
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-857600 -n no-preload-857600: exit status 2 (1.5676297s)

-- stdout --
	Running

-- /stdout --
** stderr ** 
	W0719 05:21:57.253525    7244 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-857600 logs -n 25
E0719 05:22:02.147625   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\false-255400\client.crt: The system cannot find the path specified.
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p no-preload-857600 logs -n 25: (13.3628927s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|-------------------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|-------------------|---------|---------------------|---------------------|
	| start   | -p embed-certs-561200                                  | embed-certs-561200           | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:15 UTC | 19 Jul 24 05:20 UTC |
	|         | --memory=2200                                          |                              |                   |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |                   |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |                   |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-683400 | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:15 UTC | 19 Jul 24 05:15 UTC |
	|         | default-k8s-diff-port-683400                           |                              |                   |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |                   |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-683400       | default-k8s-diff-port-683400 | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:15 UTC | 19 Jul 24 05:15 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |                   |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-683400 | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:15 UTC | 19 Jul 24 05:20 UTC |
	|         | default-k8s-diff-port-683400                           |                              |                   |         |                     |                     |
	|         | --memory=2200                                          |                              |                   |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |                   |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |                   |         |                     |                     |
	|         | --driver=docker                                        |                              |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |                   |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-857600             | no-preload-857600            | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:16 UTC | 19 Jul 24 05:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |                   |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |                   |         |                     |                     |
	| stop    | -p no-preload-857600                                   | no-preload-857600            | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:16 UTC | 19 Jul 24 05:16 UTC |
	|         | --alsologtostderr -v=3                                 |                              |                   |         |                     |                     |
	| addons  | enable dashboard -p no-preload-857600                  | no-preload-857600            | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:16 UTC | 19 Jul 24 05:16 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |                   |         |                     |                     |
	| start   | -p no-preload-857600 --memory=2200                     | no-preload-857600            | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:16 UTC | 19 Jul 24 05:21 UTC |
	|         | --alsologtostderr --wait=true                          |                              |                   |         |                     |                     |
	|         | --preload=false --driver=docker                        |                              |                   |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |                   |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-546500        | old-k8s-version-546500       | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:17 UTC | 19 Jul 24 05:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |                   |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |                   |         |                     |                     |
	| stop    | -p old-k8s-version-546500                              | old-k8s-version-546500       | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:17 UTC | 19 Jul 24 05:17 UTC |
	|         | --alsologtostderr -v=3                                 |                              |                   |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-546500             | old-k8s-version-546500       | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:17 UTC | 19 Jul 24 05:17 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |                   |         |                     |                     |
	| start   | -p old-k8s-version-546500                              | old-k8s-version-546500       | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:17 UTC |                     |
	|         | --memory=2200                                          |                              |                   |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |                   |         |                     |                     |
	|         | --kvm-network=default                                  |                              |                   |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |                   |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |                   |         |                     |                     |
	|         | --keep-context=false                                   |                              |                   |         |                     |                     |
	|         | --driver=docker                                        |                              |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |                   |         |                     |                     |
	| image   | embed-certs-561200 image list                          | embed-certs-561200           | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:20 UTC | 19 Jul 24 05:20 UTC |
	|         | --format=json                                          |                              |                   |         |                     |                     |
	| pause   | -p embed-certs-561200                                  | embed-certs-561200           | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:20 UTC | 19 Jul 24 05:20 UTC |
	|         | --alsologtostderr -v=1                                 |                              |                   |         |                     |                     |
	| unpause | -p embed-certs-561200                                  | embed-certs-561200           | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:20 UTC | 19 Jul 24 05:20 UTC |
	|         | --alsologtostderr -v=1                                 |                              |                   |         |                     |                     |
	| delete  | -p embed-certs-561200                                  | embed-certs-561200           | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:20 UTC | 19 Jul 24 05:21 UTC |
	| delete  | -p embed-certs-561200                                  | embed-certs-561200           | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:21 UTC | 19 Jul 24 05:21 UTC |
	| start   | -p newest-cni-800400 --memory=2200 --alsologtostderr   | newest-cni-800400            | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:21 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |                   |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |                   |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |                   |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |                   |         |                     |                     |
	|         | --driver=docker --kubernetes-version=v1.31.0-beta.0    |                              |                   |         |                     |                     |
	| image   | default-k8s-diff-port-683400                           | default-k8s-diff-port-683400 | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:21 UTC | 19 Jul 24 05:21 UTC |
	|         | image list --format=json                               |                              |                   |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-683400 | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:21 UTC | 19 Jul 24 05:21 UTC |
	|         | default-k8s-diff-port-683400                           |                              |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |                   |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-683400 | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:21 UTC | 19 Jul 24 05:21 UTC |
	|         | default-k8s-diff-port-683400                           |                              |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |                   |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-683400 | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:21 UTC | 19 Jul 24 05:21 UTC |
	|         | default-k8s-diff-port-683400                           |                              |                   |         |                     |                     |
	| image   | no-preload-857600 image list                           | no-preload-857600            | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:21 UTC | 19 Jul 24 05:21 UTC |
	|         | --format=json                                          |                              |                   |         |                     |                     |
	| pause   | -p no-preload-857600                                   | no-preload-857600            | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:21 UTC |                     |
	|         | --alsologtostderr -v=1                                 |                              |                   |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-683400 | minikube3\jenkins | v1.33.1 | 19 Jul 24 05:21 UTC | 19 Jul 24 05:21 UTC |
	|         | default-k8s-diff-port-683400                           |                              |                   |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 05:21:05
	Running on machine: minikube3
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 05:21:05.544598    9828 out.go:291] Setting OutFile to fd 1888 ...
	I0719 05:21:05.544850    9828 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 05:21:05.544850    9828 out.go:304] Setting ErrFile to fd 1464...
	I0719 05:21:05.544850    9828 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 05:21:05.585226    9828 out.go:298] Setting JSON to false
	I0719 05:21:05.589782    9828 start.go:129] hostinfo: {"hostname":"minikube3","uptime":183450,"bootTime":1721183014,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0719 05:21:05.589782    9828 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 05:21:05.599796    9828 out.go:177] * [newest-cni-800400] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0719 05:21:05.605310    9828 notify.go:220] Checking for updates...
	I0719 05:21:05.609306    9828 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0719 05:21:05.616735    9828 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 05:21:05.622032    9828 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0719 05:21:05.628702    9828 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 05:21:05.637308    9828 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 05:21:01.361555    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:03.878404    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:05.643295    9828 config.go:182] Loaded profile config "default-k8s-diff-port-683400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 05:21:05.644338    9828 config.go:182] Loaded profile config "no-preload-857600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0719 05:21:05.645275    9828 config.go:182] Loaded profile config "old-k8s-version-546500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0719 05:21:05.645275    9828 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 05:21:05.984488    9828 docker.go:123] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0719 05:21:06.000840    9828 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0719 05:21:06.415933    9828 info.go:266] docker info: {ID:924ecda6-fdfd-44a1-a6d3-1c1814628cc9 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:96 OomKillDisable:true NGoroutines:97 SystemTime:2024-07-19 05:21:06.36667585 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0719 05:21:06.421918    9828 out.go:177] * Using the docker driver based on user configuration
	I0719 05:21:06.430919    9828 start.go:297] selected driver: docker
	I0719 05:21:06.430919    9828 start.go:901] validating driver "docker" against <nil>
	I0719 05:21:06.430919    9828 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 05:21:06.597378    9828 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0719 05:21:07.028764    9828 info.go:266] docker info: {ID:924ecda6-fdfd-44a1-a6d3-1c1814628cc9 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:96 OomKillDisable:true NGoroutines:97 SystemTime:2024-07-19 05:21:06.978240696 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0719 05:21:07.029332    9828 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0719 05:21:07.029332    9828 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0719 05:21:07.030647    9828 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0719 05:21:07.034538    9828 out.go:177] * Using Docker Desktop driver with root privileges
	I0719 05:21:07.038534    9828 cni.go:84] Creating CNI manager for ""
	I0719 05:21:07.038534    9828 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 05:21:07.038534    9828 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 05:21:07.039537    9828 start.go:340] cluster config:
	{Name:newest-cni-800400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-800400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 05:21:07.043543    9828 out.go:177] * Starting "newest-cni-800400" primary control-plane node in "newest-cni-800400" cluster
	I0719 05:21:07.049532    9828 cache.go:121] Beginning downloading kic base image for docker with docker
	I0719 05:21:07.051582    9828 out.go:177] * Pulling base image v0.0.44-1721324606-19298 ...
	I0719 05:21:07.058533    9828 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0719 05:21:07.058533    9828 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local docker daemon
	I0719 05:21:07.058533    9828 preload.go:146] Found local preload: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0719 05:21:07.058533    9828 cache.go:56] Caching tarball of preloaded images
	I0719 05:21:07.059532    9828 preload.go:172] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0719 05:21:07.059532    9828 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0719 05:21:07.059532    9828 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\newest-cni-800400\config.json ...
	I0719 05:21:07.059532    9828 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\newest-cni-800400\config.json: {Name:mk8d31df392155f0e36c475193a46ad89ff9c4a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0719 05:21:07.306857    9828 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f is of wrong architecture
	I0719 05:21:07.307888    9828 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f to local cache
	I0719 05:21:07.307888    9828 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f.tar -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.44-1721324606-19298@sha256_1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f.tar
	I0719 05:21:07.307888    9828 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f.tar -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.44-1721324606-19298@sha256_1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f.tar
	I0719 05:21:07.307888    9828 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory
	I0719 05:21:07.307888    9828 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory, skipping pull
	I0719 05:21:07.307888    9828 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f exists in cache, skipping pull
	I0719 05:21:07.307888    9828 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f as a tarball
	I0719 05:21:07.308860    9828 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f from local cache
	I0719 05:21:07.308860    9828 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f.tar -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.44-1721324606-19298@sha256_1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f.tar
	I0719 05:21:08.006060    9828 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f from cached tarball
	I0719 05:21:08.006060    9828 cache.go:194] Successfully downloaded all kic artifacts
	I0719 05:21:08.006656    9828 start.go:360] acquireMachinesLock for newest-cni-800400: {Name:mkdd3b144b2005e1885add254cd0c3cf58c61802 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 05:21:08.006908    9828 start.go:364] duration metric: took 179.1µs to acquireMachinesLock for "newest-cni-800400"
	I0719 05:21:08.007051    9828 start.go:93] Provisioning new machine with config: &{Name:newest-cni-800400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-800400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 05:21:08.007051    9828 start.go:125] createHost starting for "" (driver="docker")
	I0719 05:21:04.771841    6228 pod_ready.go:81] duration metric: took 4m0.002574s for pod "metrics-server-78fcd8795b-p4shw" in "kube-system" namespace to be "Ready" ...
	E0719 05:21:04.771942    6228 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0719 05:21:04.771942    6228 pod_ready.go:38] duration metric: took 4m0.9010995s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 05:21:04.772018    6228 api_server.go:52] waiting for apiserver process to appear ...
	I0719 05:21:04.783345    6228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 05:21:04.852836    6228 logs.go:276] 2 containers: [0e6a091da1ad a1f089136dfd]
	I0719 05:21:04.868642    6228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 05:21:04.916461    6228 logs.go:276] 2 containers: [cccbfe71624e 4566c20bc227]
	I0719 05:21:04.930239    6228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 05:21:04.986060    6228 logs.go:276] 2 containers: [93e4e45355fa 9ce07a413e20]
	I0719 05:21:04.995042    6228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 05:21:05.059576    6228 logs.go:276] 2 containers: [5ab315262a22 85b22019ef17]
	I0719 05:21:05.072566    6228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 05:21:05.124566    6228 logs.go:276] 2 containers: [e442a865f3cd 56deee6eea2d]
	I0719 05:21:05.136557    6228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 05:21:05.195124    6228 logs.go:276] 2 containers: [1996f8977b08 1f63215b4697]
	I0719 05:21:05.206140    6228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 05:21:05.263845    6228 logs.go:276] 0 containers: []
	W0719 05:21:05.263845    6228 logs.go:278] No container was found matching "kindnet"
	I0719 05:21:05.275805    6228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0719 05:21:05.327283    6228 logs.go:276] 1 containers: [f574e991ac3b]
	I0719 05:21:05.338277    6228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 05:21:05.397092    6228 logs.go:276] 2 containers: [0f5f9cd278b3 5a57bcf51ab5]
	I0719 05:21:05.397092    6228 logs.go:123] Gathering logs for describe nodes ...
	I0719 05:21:05.397092    6228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 05:21:05.675031    6228 logs.go:123] Gathering logs for kube-apiserver [0e6a091da1ad] ...
	I0719 05:21:05.675031    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6a091da1ad"
	I0719 05:21:05.794737    6228 logs.go:123] Gathering logs for etcd [cccbfe71624e] ...
	I0719 05:21:05.794737    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cccbfe71624e"
	I0719 05:21:05.986285    6228 logs.go:123] Gathering logs for kubernetes-dashboard [f574e991ac3b] ...
	I0719 05:21:05.986380    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f574e991ac3b"
	I0719 05:21:06.093385    6228 logs.go:123] Gathering logs for storage-provisioner [5a57bcf51ab5] ...
	I0719 05:21:06.093385    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a57bcf51ab5"
	I0719 05:21:06.159330    6228 logs.go:123] Gathering logs for kubelet ...
	I0719 05:21:06.159330    6228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 05:21:06.312651    6228 logs.go:123] Gathering logs for dmesg ...
	I0719 05:21:06.312651    6228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 05:21:06.350198    6228 logs.go:123] Gathering logs for kube-scheduler [5ab315262a22] ...
	I0719 05:21:06.350198    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab315262a22"
	I0719 05:21:06.424935    6228 logs.go:123] Gathering logs for kube-scheduler [85b22019ef17] ...
	I0719 05:21:06.424935    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85b22019ef17"
	I0719 05:21:06.508942    6228 logs.go:123] Gathering logs for kube-controller-manager [1996f8977b08] ...
	I0719 05:21:06.509011    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1996f8977b08"
	I0719 05:21:06.604383    6228 logs.go:123] Gathering logs for container status ...
	I0719 05:21:06.604383    6228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 05:21:06.720382    6228 logs.go:123] Gathering logs for Docker ...
	I0719 05:21:06.720382    6228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 05:21:06.782103    6228 logs.go:123] Gathering logs for kube-apiserver [a1f089136dfd] ...
	I0719 05:21:06.782103    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1f089136dfd"
	I0719 05:21:06.951303    6228 logs.go:123] Gathering logs for coredns [93e4e45355fa] ...
	I0719 05:21:06.951303    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e4e45355fa"
	I0719 05:21:07.041538    6228 logs.go:123] Gathering logs for coredns [9ce07a413e20] ...
	I0719 05:21:07.041538    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ce07a413e20"
	I0719 05:21:07.096559    6228 logs.go:123] Gathering logs for kube-proxy [56deee6eea2d] ...
	I0719 05:21:07.096559    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56deee6eea2d"
	I0719 05:21:07.167839    6228 logs.go:123] Gathering logs for kube-controller-manager [1f63215b4697] ...
	I0719 05:21:07.167839    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f63215b4697"
	I0719 05:21:07.253840    6228 logs.go:123] Gathering logs for storage-provisioner [0f5f9cd278b3] ...
	I0719 05:21:07.253840    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f5f9cd278b3"
	I0719 05:21:07.322844    6228 logs.go:123] Gathering logs for etcd [4566c20bc227] ...
	I0719 05:21:07.322844    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4566c20bc227"
	I0719 05:21:07.434264    6228 logs.go:123] Gathering logs for kube-proxy [e442a865f3cd] ...
	I0719 05:21:07.434264    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e442a865f3cd"
	I0719 05:21:08.014215    9828 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0719 05:21:08.015176    9828 start.go:159] libmachine.API.Create for "newest-cni-800400" (driver="docker")
	I0719 05:21:08.015176    9828 client.go:168] LocalClient.Create starting
	I0719 05:21:08.015176    9828 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem
	I0719 05:21:08.016251    9828 main.go:141] libmachine: Decoding PEM data...
	I0719 05:21:08.016251    9828 main.go:141] libmachine: Parsing certificate...
	I0719 05:21:08.016251    9828 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem
	I0719 05:21:08.016251    9828 main.go:141] libmachine: Decoding PEM data...
	I0719 05:21:08.016251    9828 main.go:141] libmachine: Parsing certificate...
	I0719 05:21:08.029167    9828 cli_runner.go:164] Run: docker network inspect newest-cni-800400 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0719 05:21:08.227174    9828 cli_runner.go:211] docker network inspect newest-cni-800400 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0719 05:21:08.237195    9828 network_create.go:284] running [docker network inspect newest-cni-800400] to gather additional debugging logs...
	I0719 05:21:08.237195    9828 cli_runner.go:164] Run: docker network inspect newest-cni-800400
	W0719 05:21:08.431164    9828 cli_runner.go:211] docker network inspect newest-cni-800400 returned with exit code 1
	I0719 05:21:08.431164    9828 network_create.go:287] error running [docker network inspect newest-cni-800400]: docker network inspect newest-cni-800400: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-800400 not found
	I0719 05:21:08.431164    9828 network_create.go:289] output of [docker network inspect newest-cni-800400]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-800400 not found
	
	** /stderr **
	I0719 05:21:08.446928    9828 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0719 05:21:08.687616    9828 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0719 05:21:08.719607    9828 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0719 05:21:08.740616    9828 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00163fce0}
	I0719 05:21:08.740616    9828 network_create.go:124] attempt to create docker network newest-cni-800400 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0719 05:21:08.752610    9828 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-800400 newest-cni-800400
	W0719 05:21:08.949551    9828 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-800400 newest-cni-800400 returned with exit code 1
	W0719 05:21:08.949551    9828 network_create.go:149] failed to create docker network newest-cni-800400 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-800400 newest-cni-800400: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0719 05:21:08.949551    9828 network_create.go:116] failed to create docker network newest-cni-800400 192.168.67.0/24, will retry: subnet is taken
	I0719 05:21:08.975549    9828 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0719 05:21:09.003822    9828 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0016aca20}
	I0719 05:21:09.003822    9828 network_create.go:124] attempt to create docker network newest-cni-800400 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0719 05:21:09.017428    9828 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-800400 newest-cni-800400
	W0719 05:21:09.231231    9828 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-800400 newest-cni-800400 returned with exit code 1
	W0719 05:21:09.232229    9828 network_create.go:149] failed to create docker network newest-cni-800400 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-800400 newest-cni-800400: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0719 05:21:09.232229    9828 network_create.go:116] failed to create docker network newest-cni-800400 192.168.76.0/24, will retry: subnet is taken
	I0719 05:21:09.271550    9828 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0719 05:21:09.298199    9828 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00158dd40}
	I0719 05:21:09.298199    9828 network_create.go:124] attempt to create docker network newest-cni-800400 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0719 05:21:09.311185    9828 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-800400 newest-cni-800400
	I0719 05:21:09.607529    9828 network_create.go:108] docker network newest-cni-800400 192.168.85.0/24 created
	I0719 05:21:09.607634    9828 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-800400" container
	I0719 05:21:09.631419    9828 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0719 05:21:09.849846    9828 cli_runner.go:164] Run: docker volume create newest-cni-800400 --label name.minikube.sigs.k8s.io=newest-cni-800400 --label created_by.minikube.sigs.k8s.io=true
	I0719 05:21:10.216728    9828 oci.go:103] Successfully created a docker volume newest-cni-800400
	I0719 05:21:10.230744    9828 cli_runner.go:164] Run: docker run --rm --name newest-cni-800400-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-800400 --entrypoint /usr/bin/test -v newest-cni-800400:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -d /var/lib
	I0719 05:21:06.388905    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:08.864563    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:10.916641    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:10.024949    6228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 05:21:10.061590    6228 api_server.go:72] duration metric: took 4m23.011977s to wait for apiserver process to appear ...
	I0719 05:21:10.061590    6228 api_server.go:88] waiting for apiserver healthz status ...
	I0719 05:21:10.072656    6228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 05:21:10.114998    6228 logs.go:276] 2 containers: [0e6a091da1ad a1f089136dfd]
	I0719 05:21:10.126000    6228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 05:21:10.179914    6228 logs.go:276] 2 containers: [cccbfe71624e 4566c20bc227]
	I0719 05:21:10.193171    6228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 05:21:10.255967    6228 logs.go:276] 2 containers: [93e4e45355fa 9ce07a413e20]
	I0719 05:21:10.269971    6228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 05:21:10.336981    6228 logs.go:276] 2 containers: [5ab315262a22 85b22019ef17]
	I0719 05:21:10.360056    6228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 05:21:10.420887    6228 logs.go:276] 2 containers: [e442a865f3cd 56deee6eea2d]
	I0719 05:21:10.435868    6228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 05:21:10.494173    6228 logs.go:276] 2 containers: [1996f8977b08 1f63215b4697]
	I0719 05:21:10.514880    6228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 05:21:10.575635    6228 logs.go:276] 0 containers: []
	W0719 05:21:10.575635    6228 logs.go:278] No container was found matching "kindnet"
	I0719 05:21:10.589636    6228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 05:21:10.656987    6228 logs.go:276] 2 containers: [0f5f9cd278b3 5a57bcf51ab5]
	I0719 05:21:10.666983    6228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0719 05:21:10.717973    6228 logs.go:276] 1 containers: [f574e991ac3b]
	I0719 05:21:10.717973    6228 logs.go:123] Gathering logs for dmesg ...
	I0719 05:21:10.717973    6228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 05:21:10.752976    6228 logs.go:123] Gathering logs for etcd [cccbfe71624e] ...
	I0719 05:21:10.752976    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cccbfe71624e"
	I0719 05:21:10.940631    6228 logs.go:123] Gathering logs for kube-proxy [e442a865f3cd] ...
	I0719 05:21:10.941652    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e442a865f3cd"
	I0719 05:21:11.021052    6228 logs.go:123] Gathering logs for Docker ...
	I0719 05:21:11.021052    6228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 05:21:11.074924    6228 logs.go:123] Gathering logs for kubelet ...
	I0719 05:21:11.074924    6228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 05:21:11.215390    6228 logs.go:123] Gathering logs for kube-apiserver [0e6a091da1ad] ...
	I0719 05:21:11.215390    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6a091da1ad"
	I0719 05:21:11.281513    6228 logs.go:123] Gathering logs for kube-controller-manager [1f63215b4697] ...
	I0719 05:21:11.281631    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f63215b4697"
	I0719 05:21:11.384541    6228 logs.go:123] Gathering logs for kubernetes-dashboard [f574e991ac3b] ...
	I0719 05:21:11.384541    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f574e991ac3b"
	I0719 05:21:11.451292    6228 logs.go:123] Gathering logs for describe nodes ...
	I0719 05:21:11.451292    6228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 05:21:11.704548    6228 logs.go:123] Gathering logs for coredns [93e4e45355fa] ...
	I0719 05:21:11.704548    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e4e45355fa"
	I0719 05:21:11.770250    6228 logs.go:123] Gathering logs for coredns [9ce07a413e20] ...
	I0719 05:21:11.770250    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ce07a413e20"
	I0719 05:21:11.833411    6228 logs.go:123] Gathering logs for kube-controller-manager [1996f8977b08] ...
	I0719 05:21:11.833411    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1996f8977b08"
	I0719 05:21:11.922967    6228 logs.go:123] Gathering logs for container status ...
	I0719 05:21:11.922967    6228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 05:21:12.031958    6228 logs.go:123] Gathering logs for kube-apiserver [a1f089136dfd] ...
	I0719 05:21:12.031958    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1f089136dfd"
	I0719 05:21:12.172829    6228 logs.go:123] Gathering logs for etcd [4566c20bc227] ...
	I0719 05:21:12.172829    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4566c20bc227"
	I0719 05:21:12.298336    6228 logs.go:123] Gathering logs for kube-scheduler [5ab315262a22] ...
	I0719 05:21:12.298336    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab315262a22"
	I0719 05:21:12.368193    6228 logs.go:123] Gathering logs for kube-scheduler [85b22019ef17] ...
	I0719 05:21:12.368332    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85b22019ef17"
	I0719 05:21:12.451185    6228 logs.go:123] Gathering logs for kube-proxy [56deee6eea2d] ...
	I0719 05:21:12.451185    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56deee6eea2d"
	I0719 05:21:12.511196    6228 logs.go:123] Gathering logs for storage-provisioner [0f5f9cd278b3] ...
	I0719 05:21:12.511196    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f5f9cd278b3"
	I0719 05:21:12.574176    6228 logs.go:123] Gathering logs for storage-provisioner [5a57bcf51ab5] ...
	I0719 05:21:12.574176    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a57bcf51ab5"
	I0719 05:21:13.376011    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:15.962031    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:15.132256    6228 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52416/healthz ...
	I0719 05:21:15.157268    6228 api_server.go:279] https://127.0.0.1:52416/healthz returned 200:
	ok
	I0719 05:21:15.162238    6228 api_server.go:141] control plane version: v1.31.0-beta.0
	I0719 05:21:15.162238    6228 api_server.go:131] duration metric: took 5.1006076s to wait for apiserver health ...
	I0719 05:21:15.162238    6228 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 05:21:15.177797    6228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 05:21:15.240194    6228 logs.go:276] 2 containers: [0e6a091da1ad a1f089136dfd]
	I0719 05:21:15.256449    6228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 05:21:15.309307    6228 logs.go:276] 2 containers: [cccbfe71624e 4566c20bc227]
	I0719 05:21:15.319300    6228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 05:21:15.370893    6228 logs.go:276] 2 containers: [93e4e45355fa 9ce07a413e20]
	I0719 05:21:15.383725    6228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 05:21:15.439994    6228 logs.go:276] 2 containers: [5ab315262a22 85b22019ef17]
	I0719 05:21:15.453980    6228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 05:21:15.508549    6228 logs.go:276] 2 containers: [e442a865f3cd 56deee6eea2d]
	I0719 05:21:15.522482    6228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 05:21:15.573561    6228 logs.go:276] 2 containers: [1996f8977b08 1f63215b4697]
	I0719 05:21:15.587849    6228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 05:21:15.653165    6228 logs.go:276] 0 containers: []
	W0719 05:21:15.653165    6228 logs.go:278] No container was found matching "kindnet"
	I0719 05:21:15.667326    6228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0719 05:21:15.718833    6228 logs.go:276] 1 containers: [f574e991ac3b]
	I0719 05:21:15.732351    6228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 05:21:15.783828    6228 logs.go:276] 2 containers: [0f5f9cd278b3 5a57bcf51ab5]
	I0719 05:21:15.783828    6228 logs.go:123] Gathering logs for kube-proxy [56deee6eea2d] ...
	I0719 05:21:15.783828    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56deee6eea2d"
	I0719 05:21:15.852243    6228 logs.go:123] Gathering logs for kube-controller-manager [1996f8977b08] ...
	I0719 05:21:15.852243    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1996f8977b08"
	I0719 05:21:15.944787    6228 logs.go:123] Gathering logs for kube-controller-manager [1f63215b4697] ...
	I0719 05:21:15.945784    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f63215b4697"
	I0719 05:21:16.041342    6228 logs.go:123] Gathering logs for kubernetes-dashboard [f574e991ac3b] ...
	I0719 05:21:16.041342    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f574e991ac3b"
	I0719 05:21:16.104359    6228 logs.go:123] Gathering logs for Docker ...
	I0719 05:21:16.104359    6228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 05:21:16.161355    6228 logs.go:123] Gathering logs for kubelet ...
	I0719 05:21:16.161355    6228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 05:21:16.300146    6228 logs.go:123] Gathering logs for describe nodes ...
	I0719 05:21:16.300146    6228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 05:21:16.888673    6228 logs.go:123] Gathering logs for coredns [93e4e45355fa] ...
	I0719 05:21:16.888673    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e4e45355fa"
	I0719 05:21:16.939697    6228 logs.go:123] Gathering logs for kube-scheduler [85b22019ef17] ...
	I0719 05:21:16.939697    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85b22019ef17"
	I0719 05:21:17.019689    6228 logs.go:123] Gathering logs for kube-proxy [e442a865f3cd] ...
	I0719 05:21:17.019689    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e442a865f3cd"
	I0719 05:21:17.088371    6228 logs.go:123] Gathering logs for dmesg ...
	I0719 05:21:17.088371    6228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 05:21:17.120374    6228 logs.go:123] Gathering logs for coredns [9ce07a413e20] ...
	I0719 05:21:17.120374    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ce07a413e20"
	I0719 05:21:17.185386    6228 logs.go:123] Gathering logs for storage-provisioner [5a57bcf51ab5] ...
	I0719 05:21:17.185386    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a57bcf51ab5"
	I0719 05:21:17.261383    6228 logs.go:123] Gathering logs for etcd [4566c20bc227] ...
	I0719 05:21:17.261383    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4566c20bc227"
	I0719 05:21:17.380019    6228 logs.go:123] Gathering logs for kube-apiserver [a1f089136dfd] ...
	I0719 05:21:17.380019    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1f089136dfd"
	I0719 05:21:17.517185    6228 logs.go:123] Gathering logs for etcd [cccbfe71624e] ...
	I0719 05:21:17.517185    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cccbfe71624e"
	I0719 05:21:17.720587    6228 logs.go:123] Gathering logs for kube-scheduler [5ab315262a22] ...
	I0719 05:21:17.720587    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab315262a22"
	I0719 05:21:17.797078    6228 logs.go:123] Gathering logs for storage-provisioner [0f5f9cd278b3] ...
	I0719 05:21:17.797078    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f5f9cd278b3"
	I0719 05:21:17.940049    6228 logs.go:123] Gathering logs for container status ...
	I0719 05:21:17.940049    6228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 05:21:18.059701    6228 logs.go:123] Gathering logs for kube-apiserver [0e6a091da1ad] ...
	I0719 05:21:18.059701    6228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6a091da1ad"
	I0719 05:21:16.056353    9828 cli_runner.go:217] Completed: docker run --rm --name newest-cni-800400-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-800400 --entrypoint /usr/bin/test -v newest-cni-800400:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -d /var/lib: (5.8255631s)
	I0719 05:21:16.056353    9828 oci.go:107] Successfully prepared a docker volume newest-cni-800400
	I0719 05:21:16.056353    9828 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0719 05:21:16.056353    9828 kic.go:194] Starting extracting preloaded images to volume ...
	I0719 05:21:16.066363    9828 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-800400:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -I lz4 -xf /preloaded.tar -C /extractDir
	I0719 05:21:20.670252    6228 system_pods.go:59] 8 kube-system pods found
	I0719 05:21:20.670252    6228 system_pods.go:61] "coredns-5cfdc65f69-7mss6" [6df9e16e-7b52-453c-858b-c038b113b117] Running
	I0719 05:21:20.670252    6228 system_pods.go:61] "etcd-no-preload-857600" [d832c91f-90ec-40b4-b906-fffb17e83cb2] Running
	I0719 05:21:20.670252    6228 system_pods.go:61] "kube-apiserver-no-preload-857600" [658c752c-c5bf-4f7a-abda-9431c9e53b54] Running
	I0719 05:21:20.670252    6228 system_pods.go:61] "kube-controller-manager-no-preload-857600" [e56c121f-30d1-4914-85a9-e3602c3a6970] Running
	I0719 05:21:20.670252    6228 system_pods.go:61] "kube-proxy-58tff" [037810f3-87f3-40ac-9556-c67ce329afaf] Running
	I0719 05:21:20.670252    6228 system_pods.go:61] "kube-scheduler-no-preload-857600" [3f45ce49-9026-4922-a079-faf162506a9d] Running
	I0719 05:21:20.670252    6228 system_pods.go:61] "metrics-server-78fcd8795b-p4shw" [65757233-14b9-4ca4-a918-803b5e39bfaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 05:21:20.670252    6228 system_pods.go:61] "storage-provisioner" [d9ce36df-a5fa-40a3-881e-abcbe4b5722a] Running
	I0719 05:21:20.670252    6228 system_pods.go:74] duration metric: took 5.5079711s to wait for pod list to return data ...
	I0719 05:21:20.670252    6228 default_sa.go:34] waiting for default service account to be created ...
	I0719 05:21:20.678244    6228 default_sa.go:45] found service account: "default"
	I0719 05:21:20.678244    6228 default_sa.go:55] duration metric: took 7.9924ms for default service account to be created ...
	I0719 05:21:20.678244    6228 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 05:21:20.692239    6228 system_pods.go:86] 8 kube-system pods found
	I0719 05:21:20.692239    6228 system_pods.go:89] "coredns-5cfdc65f69-7mss6" [6df9e16e-7b52-453c-858b-c038b113b117] Running
	I0719 05:21:20.692239    6228 system_pods.go:89] "etcd-no-preload-857600" [d832c91f-90ec-40b4-b906-fffb17e83cb2] Running
	I0719 05:21:20.692239    6228 system_pods.go:89] "kube-apiserver-no-preload-857600" [658c752c-c5bf-4f7a-abda-9431c9e53b54] Running
	I0719 05:21:20.692239    6228 system_pods.go:89] "kube-controller-manager-no-preload-857600" [e56c121f-30d1-4914-85a9-e3602c3a6970] Running
	I0719 05:21:20.692239    6228 system_pods.go:89] "kube-proxy-58tff" [037810f3-87f3-40ac-9556-c67ce329afaf] Running
	I0719 05:21:20.692239    6228 system_pods.go:89] "kube-scheduler-no-preload-857600" [3f45ce49-9026-4922-a079-faf162506a9d] Running
	I0719 05:21:20.692239    6228 system_pods.go:89] "metrics-server-78fcd8795b-p4shw" [65757233-14b9-4ca4-a918-803b5e39bfaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 05:21:20.692239    6228 system_pods.go:89] "storage-provisioner" [d9ce36df-a5fa-40a3-881e-abcbe4b5722a] Running
	I0719 05:21:20.692239    6228 system_pods.go:126] duration metric: took 13.9947ms to wait for k8s-apps to be running ...
	I0719 05:21:20.692239    6228 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 05:21:20.713286    6228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 05:21:20.741242    6228 system_svc.go:56] duration metric: took 49.0025ms WaitForService to wait for kubelet
	I0719 05:21:20.741242    6228 kubeadm.go:582] duration metric: took 4m33.6915453s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 05:21:20.741242    6228 node_conditions.go:102] verifying NodePressure condition ...
	I0719 05:21:20.751242    6228 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I0719 05:21:20.751242    6228 node_conditions.go:123] node cpu capacity is 16
	I0719 05:21:20.751242    6228 node_conditions.go:105] duration metric: took 10.0003ms to run NodePressure ...
	I0719 05:21:20.751242    6228 start.go:241] waiting for startup goroutines ...
	I0719 05:21:20.751242    6228 start.go:246] waiting for cluster config update ...
	I0719 05:21:20.751242    6228 start.go:255] writing updated cluster config ...
	I0719 05:21:20.766247    6228 ssh_runner.go:195] Run: rm -f paused
	I0719 05:21:20.938256    6228 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0719 05:21:20.944252    6228 out.go:177] * Done! kubectl is now configured to use "no-preload-857600" cluster and "default" namespace by default
	I0719 05:21:18.361981    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:20.367422    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:22.870733    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:24.889390    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:27.372962    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:29.983480    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:32.368701    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:34.538265    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:39.400695    9828 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-800400:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -I lz4 -xf /preloaded.tar -C /extractDir: (23.3341495s)
	I0719 05:21:39.401737    9828 kic.go:203] duration metric: took 23.3452024s to extract preloaded images to volume ...
	I0719 05:21:39.418974    9828 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0719 05:21:39.842371    9828 info.go:266] docker info: {ID:924ecda6-fdfd-44a1-a6d3-1c1814628cc9 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:98 OomKillDisable:true NGoroutines:97 SystemTime:2024-07-19 05:21:39.782057532 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0719 05:21:39.858342    9828 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0719 05:21:40.243419    9828 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-800400 --name newest-cni-800400 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-800400 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-800400 --network newest-cni-800400 --ip 192.168.85.2 --volume newest-cni-800400:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f
	I0719 05:21:36.874845    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:39.555944    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:41.512139    9828 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-800400 --name newest-cni-800400 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-800400 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-800400 --network newest-cni-800400 --ip 192.168.85.2 --volume newest-cni-800400:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f: (1.2687101s)
	I0719 05:21:41.524165    9828 cli_runner.go:164] Run: docker container inspect newest-cni-800400 --format={{.State.Running}}
	I0719 05:21:41.759727    9828 cli_runner.go:164] Run: docker container inspect newest-cni-800400 --format={{.State.Status}}
	I0719 05:21:41.965706    9828 cli_runner.go:164] Run: docker exec newest-cni-800400 stat /var/lib/dpkg/alternatives/iptables
	I0719 05:21:42.292460    9828 oci.go:144] the created container "newest-cni-800400" has a running status.
	I0719 05:21:42.292460    9828 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\newest-cni-800400\id_rsa...
	I0719 05:21:42.465470    9828 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\newest-cni-800400\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0719 05:21:42.737805    9828 cli_runner.go:164] Run: docker container inspect newest-cni-800400 --format={{.State.Status}}
	I0719 05:21:42.992011    9828 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0719 05:21:42.993002    9828 kic_runner.go:114] Args: [docker exec --privileged newest-cni-800400 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0719 05:21:43.316134    9828 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\newest-cni-800400\id_rsa...
	I0719 05:21:41.875706    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:44.405645    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:46.272321    9828 cli_runner.go:164] Run: docker container inspect newest-cni-800400 --format={{.State.Status}}
	I0719 05:21:46.461812    9828 machine.go:94] provisionDockerMachine start ...
	I0719 05:21:46.472799    9828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800400
	I0719 05:21:46.673589    9828 main.go:141] libmachine: Using SSH client type: native
	I0719 05:21:46.688385    9828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x142aa40] 0x142d620 <nil>  [] 0s} 127.0.0.1 52755 <nil> <nil>}
	I0719 05:21:46.688477    9828 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 05:21:46.867678    9828 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-800400
	
	I0719 05:21:46.867752    9828 ubuntu.go:169] provisioning hostname "newest-cni-800400"
	I0719 05:21:46.885960    9828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800400
	I0719 05:21:47.072019    9828 main.go:141] libmachine: Using SSH client type: native
	I0719 05:21:47.073017    9828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x142aa40] 0x142d620 <nil>  [] 0s} 127.0.0.1 52755 <nil> <nil>}
	I0719 05:21:47.073017    9828 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-800400 && echo "newest-cni-800400" | sudo tee /etc/hostname
	I0719 05:21:47.281806    9828 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-800400
	
	I0719 05:21:47.292821    9828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800400
	I0719 05:21:47.507601    9828 main.go:141] libmachine: Using SSH client type: native
	I0719 05:21:47.508796    9828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x142aa40] 0x142d620 <nil>  [] 0s} 127.0.0.1 52755 <nil> <nil>}
	I0719 05:21:47.508992    9828 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-800400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-800400/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-800400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 05:21:47.681259    9828 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 05:21:47.681325    9828 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0719 05:21:47.681325    9828 ubuntu.go:177] setting up certificates
	I0719 05:21:47.681325    9828 provision.go:84] configureAuth start
	I0719 05:21:47.693729    9828 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-800400
	I0719 05:21:47.889647    9828 provision.go:143] copyHostCerts
	I0719 05:21:47.890317    9828 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0719 05:21:47.890317    9828 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0719 05:21:47.891113    9828 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0719 05:21:47.892426    9828 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0719 05:21:47.892426    9828 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0719 05:21:47.892426    9828 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0719 05:21:47.893697    9828 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0719 05:21:47.893697    9828 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0719 05:21:47.894580    9828 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1675 bytes)
	I0719 05:21:47.895374    9828 provision.go:117] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.newest-cni-800400 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-800400]
	I0719 05:21:48.573925    9828 provision.go:177] copyRemoteCerts
	I0719 05:21:48.589538    9828 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 05:21:48.598198    9828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800400
	I0719 05:21:48.762366    9828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52755 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\newest-cni-800400\id_rsa Username:docker}
	I0719 05:21:48.885568    9828 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0719 05:21:48.945351    9828 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 05:21:48.989407    9828 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0719 05:21:49.036634    9828 provision.go:87] duration metric: took 1.3551665s to configureAuth
	I0719 05:21:49.036634    9828 ubuntu.go:193] setting minikube options for container-runtime
	I0719 05:21:49.037426    9828 config.go:182] Loaded profile config "newest-cni-800400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0719 05:21:49.051396    9828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800400
	I0719 05:21:49.238399    9828 main.go:141] libmachine: Using SSH client type: native
	I0719 05:21:49.239100    9828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x142aa40] 0x142d620 <nil>  [] 0s} 127.0.0.1 52755 <nil> <nil>}
	I0719 05:21:49.239100    9828 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0719 05:21:49.417213    9828 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0719 05:21:49.417424    9828 ubuntu.go:71] root file system type: overlay
	I0719 05:21:49.417680    9828 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0719 05:21:49.427614    9828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800400
	I0719 05:21:49.623049    9828 main.go:141] libmachine: Using SSH client type: native
	I0719 05:21:49.623583    9828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x142aa40] 0x142d620 <nil>  [] 0s} 127.0.0.1 52755 <nil> <nil>}
	I0719 05:21:49.623652    9828 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0719 05:21:49.839500    9828 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0719 05:21:49.853322    9828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800400
	I0719 05:21:50.039615    9828 main.go:141] libmachine: Using SSH client type: native
	I0719 05:21:50.039615    9828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x142aa40] 0x142d620 <nil>  [] 0s} 127.0.0.1 52755 <nil> <nil>}
	I0719 05:21:50.040157    9828 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0719 05:21:46.865610    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:48.941374    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:52.219055    9828 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-06-29 00:00:53.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-07-19 05:21:49.824160591 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0719 05:21:52.219143    9828 machine.go:97] duration metric: took 5.7572858s to provisionDockerMachine
	I0719 05:21:52.219192    9828 client.go:171] duration metric: took 44.2036706s to LocalClient.Create
	I0719 05:21:52.219232    9828 start.go:167] duration metric: took 44.2037108s to libmachine.API.Create "newest-cni-800400"
	I0719 05:21:52.219325    9828 start.go:293] postStartSetup for "newest-cni-800400" (driver="docker")
	I0719 05:21:52.219375    9828 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 05:21:52.236746    9828 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 05:21:52.248585    9828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800400
	I0719 05:21:52.440537    9828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52755 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\newest-cni-800400\id_rsa Username:docker}
	I0719 05:21:52.589427    9828 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 05:21:52.600299    9828 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0719 05:21:52.600299    9828 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0719 05:21:52.600299    9828 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0719 05:21:52.600299    9828 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0719 05:21:52.600299    9828 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0719 05:21:52.600299    9828 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0719 05:21:52.601614    9828 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\109722.pem -> 109722.pem in /etc/ssl/certs
	I0719 05:21:52.616240    9828 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 05:21:52.642258    9828 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\109722.pem --> /etc/ssl/certs/109722.pem (1708 bytes)
	I0719 05:21:52.694796    9828 start.go:296] duration metric: took 475.4666ms for postStartSetup
	I0719 05:21:52.707919    9828 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-800400
	I0719 05:21:52.905760    9828 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\newest-cni-800400\config.json ...
	I0719 05:21:52.921961    9828 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 05:21:52.931362    9828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800400
	I0719 05:21:53.123259    9828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52755 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\newest-cni-800400\id_rsa Username:docker}
	I0719 05:21:53.270139    9828 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0719 05:21:53.283860    9828 start.go:128] duration metric: took 45.2764559s to createHost
	I0719 05:21:53.283860    9828 start.go:83] releasing machines lock for "newest-cni-800400", held for 45.2764559s
	I0719 05:21:53.294736    9828 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-800400
	I0719 05:21:53.481320    9828 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0719 05:21:53.492561    9828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800400
	I0719 05:21:53.492561    9828 ssh_runner.go:195] Run: cat /version.json
	I0719 05:21:53.503688    9828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800400
	I0719 05:21:53.698249    9828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52755 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\newest-cni-800400\id_rsa Username:docker}
	I0719 05:21:53.712084    9828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52755 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\newest-cni-800400\id_rsa Username:docker}
	W0719 05:21:53.820985    9828 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0719 05:21:53.829990    9828 ssh_runner.go:195] Run: systemctl --version
	I0719 05:21:53.860885    9828 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0719 05:21:53.892668    9828 ssh_runner.go:195] Run: sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	W0719 05:21:53.918548    9828 start.go:439] unable to name loopback interface in configureRuntimes: unable to patch loopback cni config "/etc/cni/net.d/*loopback.conf*": sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;: Process exited with status 1
	stdout:
	
	stderr:
	find: '\\etc\\cni\\net.d': No such file or directory
	I0719 05:21:53.934263    9828 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	W0719 05:21:53.941300    9828 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W0719 05:21:53.941380    9828 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0719 05:21:54.032731    9828 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 05:21:54.032816    9828 start.go:495] detecting cgroup driver to use...
	I0719 05:21:54.032816    9828 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0719 05:21:54.033116    9828 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 05:21:54.097897    9828 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0719 05:21:54.140454    9828 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0719 05:21:54.162314    9828 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0719 05:21:54.175648    9828 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0719 05:21:54.224644    9828 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 05:21:54.268720    9828 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0719 05:21:54.303541    9828 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 05:21:54.338613    9828 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 05:21:54.388864    9828 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0719 05:21:54.431716    9828 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0719 05:21:54.471434    9828 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0719 05:21:54.509544    9828 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 05:21:54.549895    9828 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 05:21:54.588755    9828 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 05:21:54.770356    9828 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0719 05:21:55.004211    9828 start.go:495] detecting cgroup driver to use...
	I0719 05:21:55.004258    9828 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0719 05:21:55.023055    9828 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0719 05:21:55.053040    9828 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0719 05:21:55.080438    9828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 05:21:55.124568    9828 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 05:21:55.186566    9828 ssh_runner.go:195] Run: which cri-dockerd
	I0719 05:21:55.217568    9828 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0719 05:21:55.239573    9828 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0719 05:21:55.296562    9828 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0719 05:21:51.609634    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:53.863867    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:55.865083    8448 pod_ready.go:102] pod "metrics-server-9975d5f86-mzhrf" in "kube-system" namespace has status "Ready":"False"
	I0719 05:21:55.561932    9828 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0719 05:21:55.794227    9828 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0719 05:21:55.794618    9828 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0719 05:21:55.865083    9828 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 05:21:56.079185    9828 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 05:21:57.057547    9828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0719 05:21:57.118803    9828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0719 05:21:57.170735    9828 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0719 05:21:57.370033    9828 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0719 05:21:57.579560    9828 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 05:21:57.768937    9828 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0719 05:21:57.837967    9828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0719 05:21:57.880635    9828 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 05:21:58.144689    9828 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0719 05:21:58.372101    9828 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0719 05:21:58.395883    9828 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0719 05:21:58.404825    9828 start.go:563] Will wait 60s for crictl version
	I0719 05:21:58.418811    9828 ssh_runner.go:195] Run: which crictl
	I0719 05:21:58.441788    9828 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 05:21:58.548104    9828 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0719 05:21:58.560118    9828 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 05:21:58.639914    9828 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	
	
	==> Docker <==
	Jul 19 05:18:08 no-preload-857600 dockerd[1101]: time="2024-07-19T05:18:08.326892619Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	Jul 19 05:18:08 no-preload-857600 dockerd[1101]: time="2024-07-19T05:18:08.326986531Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	Jul 19 05:18:08 no-preload-857600 dockerd[1101]: time="2024-07-19T05:18:08.343459563Z" level=error msg="Handler for POST /v1.43/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	Jul 19 05:18:21 no-preload-857600 dockerd[1101]: time="2024-07-19T05:18:21.589421709Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Jul 19 05:18:21 no-preload-857600 dockerd[1101]: time="2024-07-19T05:18:21.842545843Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Jul 19 05:18:21 no-preload-857600 dockerd[1101]: time="2024-07-19T05:18:21.842781474Z" level=warning msg="[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Jul 19 05:18:21 no-preload-857600 dockerd[1101]: time="2024-07-19T05:18:21.842823980Z" level=info msg="Attempting next endpoint for pull after error: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Jul 19 05:18:21 no-preload-857600 cri-dockerd[1387]: time="2024-07-19T05:18:21Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Jul 19 05:19:00 no-preload-857600 dockerd[1101]: time="2024-07-19T05:19:00.304343915Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	Jul 19 05:19:00 no-preload-857600 dockerd[1101]: time="2024-07-19T05:19:00.304508936Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	Jul 19 05:19:00 no-preload-857600 dockerd[1101]: time="2024-07-19T05:19:00.361402687Z" level=error msg="Handler for POST /v1.43/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	Jul 19 05:19:15 no-preload-857600 dockerd[1101]: time="2024-07-19T05:19:15.551518124Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Jul 19 05:19:15 no-preload-857600 dockerd[1101]: time="2024-07-19T05:19:15.797091651Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Jul 19 05:19:15 no-preload-857600 dockerd[1101]: time="2024-07-19T05:19:15.797425793Z" level=warning msg="[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Jul 19 05:19:15 no-preload-857600 dockerd[1101]: time="2024-07-19T05:19:15.797615617Z" level=info msg="Attempting next endpoint for pull after error: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Jul 19 05:19:15 no-preload-857600 cri-dockerd[1387]: time="2024-07-19T05:19:15Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Jul 19 05:20:35 no-preload-857600 dockerd[1101]: time="2024-07-19T05:20:35.296254322Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	Jul 19 05:20:35 no-preload-857600 dockerd[1101]: time="2024-07-19T05:20:35.296531058Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	Jul 19 05:20:35 no-preload-857600 dockerd[1101]: time="2024-07-19T05:20:35.313326811Z" level=error msg="Handler for POST /v1.43/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	Jul 19 05:20:41 no-preload-857600 dockerd[1101]: time="2024-07-19T05:20:41.525733798Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Jul 19 05:20:41 no-preload-857600 dockerd[1101]: time="2024-07-19T05:20:41.785965538Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Jul 19 05:20:41 no-preload-857600 dockerd[1101]: time="2024-07-19T05:20:41.786629523Z" level=warning msg="[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Jul 19 05:20:41 no-preload-857600 dockerd[1101]: time="2024-07-19T05:20:41.787078080Z" level=info msg="Attempting next endpoint for pull after error: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Jul 19 05:20:41 no-preload-857600 cri-dockerd[1387]: time="2024-07-19T05:20:41Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Jul 19 05:21:38 no-preload-857600 dockerd[1101]: time="2024-07-19T05:21:38.392028036Z" level=error msg="Handler for POST /v1.46/containers/cccbfe71624e/pause returned error: cannot pause container cccbfe71624edef056c8c354cf0407e2da608a77084113a15e615dbcd101eab2: OCI runtime pause failed: unable to freeze: unknown"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f574e991ac3be       kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93        4 minutes ago       Running             kubernetes-dashboard      0                   86d5658d79966       kubernetes-dashboard-5cc9f66cf4-bxrdv
	a1035fa3f02e5       56cc512116c8f                                                                                         4 minutes ago       Running             busybox                   1                   d902f51a55618       busybox
	93e4e45355fa7       cbb01a7bd410d                                                                                         4 minutes ago       Running             coredns                   1                   a83c2fcaf08a7       coredns-5cfdc65f69-7mss6
	0f5f9cd278b3b       6e38f40d628db                                                                                         4 minutes ago       Running             storage-provisioner       1                   c5e888205c095       storage-provisioner
	e442a865f3cd3       c6c6581369906                                                                                         4 minutes ago       Running             kube-proxy                1                   6e4428c651605       kube-proxy-58tff
	1996f8977b081       63cf9a9f4bf5d                                                                                         5 minutes ago       Running             kube-controller-manager   1                   f551e032953ec       kube-controller-manager-no-preload-857600
	0e6a091da1adc       f9a39d2c9991a                                                                                         5 minutes ago       Running             kube-apiserver            1                   c041bebd04954       kube-apiserver-no-preload-857600
	cccbfe71624ed       cfec37af81d91                                                                                         5 minutes ago       Running             etcd                      1                   605313d29ef17       etcd-no-preload-857600
	5ab315262a227       d2edabc17c519                                                                                         5 minutes ago       Running             kube-scheduler            1                   8bcd2e14ae482       kube-scheduler-no-preload-857600
	6db3ce7491b6d       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   6 minutes ago       Exited              busybox                   0                   687f415cfa434       busybox
	5a57bcf51ab50       6e38f40d628db                                                                                         6 minutes ago       Exited              storage-provisioner       0                   1b6fc12aeb61f       storage-provisioner
	9ce07a413e207       cbb01a7bd410d                                                                                         6 minutes ago       Exited              coredns                   0                   b9a6680f718ab       coredns-5cfdc65f69-7mss6
	56deee6eea2de       c6c6581369906                                                                                         6 minutes ago       Exited              kube-proxy                0                   cac5f784fc987       kube-proxy-58tff
	1f63215b46970       63cf9a9f4bf5d                                                                                         7 minutes ago       Exited              kube-controller-manager   0                   8c05d42bb488c       kube-controller-manager-no-preload-857600
	85b22019ef172       d2edabc17c519                                                                                         7 minutes ago       Exited              kube-scheduler            0                   ecc094e07a4f8       kube-scheduler-no-preload-857600
	4566c20bc227e       cfec37af81d91                                                                                         7 minutes ago       Exited              etcd                      0                   3766100ed5629       etcd-no-preload-857600
	a1f089136dfd4       f9a39d2c9991a                                                                                         7 minutes ago       Exited              kube-apiserver            0                   b40324a506331       kube-apiserver-no-preload-857600
	
	
	==> coredns [93e4e45355fa] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = f869070685748660180df1b7a47d58cdafcf2f368266578c062d1151dc2c900964aecc5975e8882e6de6fdfb6460463e30ebfaad2ec8f0c3c6436f80225b3b5b
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:41389 - 47107 "HINFO IN 1402704032434561587.771532936133455709. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.075681197s
	
	
	==> coredns [9ce07a413e20] <==
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: Trace[1848295072]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Jul-2024 05:15:18.296) (total time: 21027ms):
	Trace[1848295072]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused 21026ms (05:15:39.323)
	Trace[1848295072]: [21.027234486s] [21.027234486s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: Trace[1453536071]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Jul-2024 05:15:18.296) (total time: 21028ms):
	Trace[1453536071]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused 21028ms (05:15:39.324)
	Trace[1453536071]: [21.028310725s] [21.028310725s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: Trace[910279935]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Jul-2024 05:15:18.296) (total time: 21028ms):
	Trace[910279935]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused 21028ms (05:15:39.325)
	Trace[910279935]: [21.028495349s] [21.028495349s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = f869070685748660180df1b7a47d58cdafcf2f368266578c062d1151dc2c900964aecc5975e8882e6de6fdfb6460463e30ebfaad2ec8f0c3c6436f80225b3b5b
	[INFO] Reloading complete
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	
	
	==> dmesg <==
	[Jul19 04:58] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Jul19 05:01] tmpfs: Unknown parameter 'noswap'
	[  +6.617016] tmpfs: Unknown parameter 'noswap'
	[Jul19 05:05] tmpfs: Unknown parameter 'noswap'
	[ +13.843159] tmpfs: Unknown parameter 'noswap'
	[Jul19 05:14] tmpfs: Unknown parameter 'noswap'
	[ +12.988281] tmpfs: Unknown parameter 'noswap'
	[Jul19 05:16] tmpfs: Unknown parameter 'noswap'
	[Jul19 05:21] tmpfs: Unknown parameter 'noswap'
	
	
	==> etcd [4566c20bc227] <==
	{"level":"info","ts":"2024-07-19T05:16:05.181333Z","caller":"traceutil/trace.go:171","msg":"trace[890466085] transaction","detail":"{read_only:false; response_revision:499; number_of_response:1; }","duration":"133.411667ms","start":"2024-07-19T05:16:05.047892Z","end":"2024-07-19T05:16:05.181304Z","steps":["trace[890466085] 'process raft request'  (duration: 94.62906ms)","trace[890466085] 'compare'  (duration: 38.622086ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-19T05:16:05.190419Z","caller":"traceutil/trace.go:171","msg":"trace[1350450407] transaction","detail":"{read_only:false; response_revision:500; number_of_response:1; }","duration":"120.066009ms","start":"2024-07-19T05:16:05.07033Z","end":"2024-07-19T05:16:05.190396Z","steps":["trace[1350450407] 'process raft request'  (duration: 119.76757ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T05:16:05.190596Z","caller":"traceutil/trace.go:171","msg":"trace[574223381] transaction","detail":"{read_only:false; response_revision:503; number_of_response:1; }","duration":"115.380992ms","start":"2024-07-19T05:16:05.075205Z","end":"2024-07-19T05:16:05.190586Z","steps":["trace[574223381] 'process raft request'  (duration: 115.145161ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T05:16:05.190544Z","caller":"traceutil/trace.go:171","msg":"trace[272282916] transaction","detail":"{read_only:false; response_revision:502; number_of_response:1; }","duration":"116.105787ms","start":"2024-07-19T05:16:05.074422Z","end":"2024-07-19T05:16:05.190528Z","steps":["trace[272282916] 'process raft request'  (duration: 115.859255ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T05:16:05.190792Z","caller":"traceutil/trace.go:171","msg":"trace[1050857287] transaction","detail":"{read_only:false; response_revision:501; number_of_response:1; }","duration":"118.402491ms","start":"2024-07-19T05:16:05.072369Z","end":"2024-07-19T05:16:05.190772Z","steps":["trace[1050857287] 'process raft request'  (duration: 117.855519ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T05:16:05.341807Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.856383ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/metrics-server\" ","response":"range_response_count:1 size:5144"}
	{"level":"info","ts":"2024-07-19T05:16:05.342035Z","caller":"traceutil/trace.go:171","msg":"trace[1764673753] range","detail":"{range_begin:/registry/deployments/kube-system/metrics-server; range_end:; response_count:1; response_revision:509; }","duration":"139.095615ms","start":"2024-07-19T05:16:05.202917Z","end":"2024-07-19T05:16:05.342013Z","steps":["trace[1764673753] 'agreement among raft nodes before linearized reading'  (duration: 138.778373ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T05:16:05.342288Z","caller":"traceutil/trace.go:171","msg":"trace[1324801329] transaction","detail":"{read_only:false; response_revision:508; number_of_response:1; }","duration":"139.29124ms","start":"2024-07-19T05:16:05.202954Z","end":"2024-07-19T05:16:05.342245Z","steps":["trace[1324801329] 'process raft request'  (duration: 138.660057ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T05:16:05.342427Z","caller":"traceutil/trace.go:171","msg":"trace[1603121790] transaction","detail":"{read_only:false; response_revision:509; number_of_response:1; }","duration":"139.390054ms","start":"2024-07-19T05:16:05.203013Z","end":"2024-07-19T05:16:05.342403Z","steps":["trace[1603121790] 'process raft request'  (duration: 138.639555ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T05:16:05.342496Z","caller":"traceutil/trace.go:171","msg":"trace[1578776893] transaction","detail":"{read_only:false; response_revision:506; number_of_response:1; }","duration":"146.699016ms","start":"2024-07-19T05:16:05.195781Z","end":"2024-07-19T05:16:05.34248Z","steps":["trace[1578776893] 'process raft request'  (duration: 98.702396ms)","trace[1578776893] 'compare'  (duration: 46.845069ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-19T05:16:05.342335Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.549148ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-78fcd8795b-p4shw\" ","response":"range_response_count:1 size:2947"}
	{"level":"info","ts":"2024-07-19T05:16:05.342613Z","caller":"traceutil/trace.go:171","msg":"trace[253359815] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-78fcd8795b-p4shw; range_end:; response_count:1; response_revision:509; }","duration":"135.813083ms","start":"2024-07-19T05:16:05.206773Z","end":"2024-07-19T05:16:05.342586Z","steps":["trace[253359815] 'agreement among raft nodes before linearized reading'  (duration: 135.516644ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T05:16:05.342641Z","caller":"traceutil/trace.go:171","msg":"trace[1653564789] transaction","detail":"{read_only:false; response_revision:507; number_of_response:1; }","duration":"144.000061ms","start":"2024-07-19T05:16:05.19862Z","end":"2024-07-19T05:16:05.34262Z","steps":["trace[1653564789] 'process raft request'  (duration: 142.937321ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T05:16:05.710152Z","caller":"traceutil/trace.go:171","msg":"trace[370121252] transaction","detail":"{read_only:false; response_revision:519; number_of_response:1; }","duration":"116.819082ms","start":"2024-07-19T05:16:05.5933Z","end":"2024-07-19T05:16:05.710119Z","steps":["trace[370121252] 'process raft request'  (duration: 52.225777ms)","trace[370121252] 'compare'  (duration: 64.315969ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-19T05:16:06.789667Z","caller":"traceutil/trace.go:171","msg":"trace[2103382825] transaction","detail":"{read_only:false; response_revision:524; number_of_response:1; }","duration":"108.581398ms","start":"2024-07-19T05:16:06.681058Z","end":"2024-07-19T05:16:06.78964Z","steps":["trace[2103382825] 'process raft request'  (duration: 108.377571ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T05:16:07.667203Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-19T05:16:07.667658Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"no-preload-857600","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"]}
	{"level":"warn","ts":"2024-07-19T05:16:07.667827Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-19T05:16:07.667987Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-19T05:16:07.86732Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.94.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-19T05:16:07.867391Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.94.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-19T05:16:07.867489Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"dfc97eb0aae75b33","current-leader-member-id":"dfc97eb0aae75b33"}
	{"level":"info","ts":"2024-07-19T05:16:07.967152Z","caller":"embed/etcd.go:580","msg":"stopping serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2024-07-19T05:16:07.967334Z","caller":"embed/etcd.go:585","msg":"stopped serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2024-07-19T05:16:07.967441Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"no-preload-857600","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"]}
	
	
	==> etcd [cccbfe71624e] <==
	{"level":"info","ts":"2024-07-19T05:21:32.832662Z","caller":"traceutil/trace.go:171","msg":"trace[1331832826] linearizableReadLoop","detail":"{readStateIndex:1035; appliedIndex:1033; }","duration":"209.692439ms","start":"2024-07-19T05:21:32.622953Z","end":"2024-07-19T05:21:32.832645Z","steps":["trace[1331832826] 'read index received'  (duration: 2.309187ms)","trace[1331832826] 'applied index is now lower than readState.Index'  (duration: 207.382352ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-19T05:21:32.832808Z","caller":"traceutil/trace.go:171","msg":"trace[321442553] transaction","detail":"{read_only:false; response_revision:946; number_of_response:1; }","duration":"448.141849ms","start":"2024-07-19T05:21:32.384644Z","end":"2024-07-19T05:21:32.832786Z","steps":["trace[321442553] 'process raft request'  (duration: 447.840612ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T05:21:32.832856Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"209.892464ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-19T05:21:32.832898Z","caller":"traceutil/trace.go:171","msg":"trace[1071415609] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:946; }","duration":"209.936669ms","start":"2024-07-19T05:21:32.622947Z","end":"2024-07-19T05:21:32.832884Z","steps":["trace[1071415609] 'agreement among raft nodes before linearized reading'  (duration: 209.867161ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T05:21:32.83306Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T05:21:32.384615Z","time spent":"448.250462ms","remote":"127.0.0.1:53118","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":557,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/no-preload-857600\" mod_revision:938 > success:<request_put:<key:\"/registry/leases/kube-node-lease/no-preload-857600\" value_size:499 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/no-preload-857600\" > >"}
	{"level":"warn","ts":"2024-07-19T05:21:33.123408Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"168.281897ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets/\" range_end:\"/registry/secrets0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-07-19T05:21:33.123597Z","caller":"traceutil/trace.go:171","msg":"trace[467006304] range","detail":"{range_begin:/registry/secrets/; range_end:/registry/secrets0; response_count:0; response_revision:946; }","duration":"168.501024ms","start":"2024-07-19T05:21:32.955076Z","end":"2024-07-19T05:21:33.123577Z","steps":["trace[467006304] 'count revisions from in-memory index tree'  (duration: 168.181384ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T05:21:33.417097Z","caller":"traceutil/trace.go:171","msg":"trace[1652786851] transaction","detail":"{read_only:false; response_revision:947; number_of_response:1; }","duration":"158.448276ms","start":"2024-07-19T05:21:33.258613Z","end":"2024-07-19T05:21:33.417062Z","steps":["trace[1652786851] 'process raft request'  (duration: 157.985918ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T05:21:35.720728Z","caller":"traceutil/trace.go:171","msg":"trace[528320762] linearizableReadLoop","detail":"{readStateIndex:1037; appliedIndex:1036; }","duration":"194.530156ms","start":"2024-07-19T05:21:35.526163Z","end":"2024-07-19T05:21:35.720693Z","steps":["trace[528320762] 'read index received'  (duration: 194.202115ms)","trace[528320762] 'applied index is now lower than readState.Index'  (duration: 327.141µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-19T05:21:35.720992Z","caller":"traceutil/trace.go:171","msg":"trace[1728522886] transaction","detail":"{read_only:false; response_revision:948; number_of_response:1; }","duration":"292.56443ms","start":"2024-07-19T05:21:35.428414Z","end":"2024-07-19T05:21:35.720978Z","steps":["trace[1728522886] 'process raft request'  (duration: 292.066168ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T05:21:35.721215Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"195.024618ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-07-19T05:21:35.721289Z","caller":"traceutil/trace.go:171","msg":"trace[737382215] range","detail":"{range_begin:/registry/prioritylevelconfigurations/; range_end:/registry/prioritylevelconfigurations0; response_count:0; response_revision:948; }","duration":"195.116029ms","start":"2024-07-19T05:21:35.526156Z","end":"2024-07-19T05:21:35.721272Z","steps":["trace[737382215] 'agreement among raft nodes before linearized reading'  (duration: 194.992714ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T05:21:37.768213Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"332.172048ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571755476038499539 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.94.2\" mod_revision:942 > success:<request_put:<key:\"/registry/masterleases/192.168.94.2\" value_size:65 lease:6571755476038499537 >> failure:<request_range:<key:\"/registry/masterleases/192.168.94.2\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-19T05:21:37.768817Z","caller":"traceutil/trace.go:171","msg":"trace[1404558607] linearizableReadLoop","detail":"{readStateIndex:1039; appliedIndex:1038; }","duration":"149.098414ms","start":"2024-07-19T05:21:37.619701Z","end":"2024-07-19T05:21:37.768799Z","steps":["trace[1404558607] 'read index received'  (duration: 128.716µs)","trace[1404558607] 'applied index is now lower than readState.Index'  (duration: 148.967198ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-19T05:21:37.76913Z","caller":"traceutil/trace.go:171","msg":"trace[252856347] transaction","detail":"{read_only:false; response_revision:949; number_of_response:1; }","duration":"388.319921ms","start":"2024-07-19T05:21:37.380642Z","end":"2024-07-19T05:21:37.768962Z","steps":["trace[252856347] 'process raft request'  (duration: 55.205055ms)","trace[252856347] 'compare'  (duration: 331.996827ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-19T05:21:37.769454Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T05:21:37.380624Z","time spent":"388.542648ms","remote":"127.0.0.1:52870","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":116,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.94.2\" mod_revision:942 > success:<request_put:<key:\"/registry/masterleases/192.168.94.2\" value_size:65 lease:6571755476038499537 >> failure:<request_range:<key:\"/registry/masterleases/192.168.94.2\" > >"}
	{"level":"warn","ts":"2024-07-19T05:21:37.76961Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"149.897114ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-19T05:21:37.769776Z","caller":"traceutil/trace.go:171","msg":"trace[1853331938] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:949; }","duration":"150.065534ms","start":"2024-07-19T05:21:37.619694Z","end":"2024-07-19T05:21:37.769759Z","steps":["trace[1853331938] 'agreement among raft nodes before linearized reading'  (duration: 149.744895ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T05:21:39.074756Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.187724189s","expected-duration":"100ms","prefix":"","request":"header:<ID:6571755476038499545 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:948 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-19T05:21:39.074941Z","caller":"traceutil/trace.go:171","msg":"trace[238508671] linearizableReadLoop","detail":"{readStateIndex:1040; appliedIndex:1039; }","duration":"1.246493546s","start":"2024-07-19T05:21:37.828434Z","end":"2024-07-19T05:21:39.074927Z","steps":["trace[238508671] 'read index received'  (duration: 58.470023ms)","trace[238508671] 'applied index is now lower than readState.Index'  (duration: 1.188021523s)"],"step_count":2}
	{"level":"warn","ts":"2024-07-19T05:21:39.075129Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.246741876s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/default/kubernetes\" ","response":"range_response_count:1 size:474"}
	{"level":"info","ts":"2024-07-19T05:21:39.075233Z","caller":"traceutil/trace.go:171","msg":"trace[1039528414] range","detail":"{range_begin:/registry/endpointslices/default/kubernetes; range_end:; response_count:1; response_revision:950; }","duration":"1.246844788s","start":"2024-07-19T05:21:37.828375Z","end":"2024-07-19T05:21:39.07522Z","steps":["trace[1039528414] 'agreement among raft nodes before linearized reading'  (duration: 1.246645164s)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T05:21:39.075232Z","caller":"traceutil/trace.go:171","msg":"trace[726508930] transaction","detail":"{read_only:false; response_revision:950; number_of_response:1; }","duration":"1.246632863s","start":"2024-07-19T05:21:37.828407Z","end":"2024-07-19T05:21:39.075039Z","steps":["trace[726508930] 'process raft request'  (duration: 58.444519ms)","trace[726508930] 'compare'  (duration: 1.186935697s)"],"step_count":2}
	{"level":"warn","ts":"2024-07-19T05:21:39.075358Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T05:21:37.828341Z","time spent":"1.247001906s","remote":"127.0.0.1:53134","response type":"/etcdserverpb.KV/Range","request count":0,"request size":45,"response count":1,"response size":498,"request content":"key:\"/registry/endpointslices/default/kubernetes\" "}
	{"level":"warn","ts":"2024-07-19T05:21:39.075502Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T05:21:37.828388Z","time spent":"1.246962201s","remote":"127.0.0.1:53000","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:948 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	
	
	==> kernel <==
	 05:22:11 up 2 days,  2:58,  0 users,  load average: 5.89, 7.37, 7.92
	Linux no-preload-857600 5.15.146.1-microsoft-standard-WSL2 #1 SMP Thu Jan 11 04:09:03 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kube-apiserver [0e6a091da1ad] <==
	I0719 05:17:10.999930       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0719 05:17:11.512334       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.161.253"}
	I0719 05:17:13.170051       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.23.162"}
	I0719 05:17:14.268168       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0719 05:17:14.280847       1 controller.go:615] quota admission added evaluator for: endpoints
	I0719 05:17:14.661709       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	W0719 05:18:04.984327       1 handler_proxy.go:99] no RequestInfo found in the context
	E0719 05:18:04.984513       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0719 05:18:04.984646       1 handler_proxy.go:99] no RequestInfo found in the context
	E0719 05:18:04.984680       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0719 05:18:04.985951       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0719 05:18:04.986016       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0719 05:20:04.974046       1 handler_proxy.go:99] no RequestInfo found in the context
	E0719 05:20:04.974322       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0719 05:20:04.974470       1 handler_proxy.go:99] no RequestInfo found in the context
	E0719 05:20:04.974514       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0719 05:20:04.975800       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0719 05:20:04.975980       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0719 05:21:37.776591       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="57.843383ms" method="GET" path="/readyz" result=null
	
	
	==> kube-apiserver [a1f089136dfd] <==
	W0719 05:16:17.035504       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:16:17.086994       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:16:17.114586       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:16:17.121571       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:16:17.139449       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:16:17.147140       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:16:17.154377       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:16:17.210236       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:16:17.238773       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:16:17.245391       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:16:17.265603       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:16:17.302596       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:16:17.303352       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:16:17.307117       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:16:17.316041       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:16:17.527271       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:16:17.530079       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:16:17.569579       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:16:17.598686       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:16:17.641568       1 logging.go:55] [core] [Channel #190 SubChannel #191]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:16:17.652833       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:16:17.664848       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:16:17.678191       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:16:17.700298       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:16:17.728043       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [1996f8977b08] <==
	I0719 05:18:21.303825       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-64dbdb65b8" duration="241.732µs"
	I0719 05:18:22.288232       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="172.823µs"
	I0719 05:18:34.281553       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="70.209µs"
	I0719 05:18:34.320621       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-64dbdb65b8" duration="68.009µs"
	E0719 05:18:44.514103       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0719 05:18:44.624128       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0719 05:18:48.282508       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-64dbdb65b8" duration="203.927µs"
	I0719 05:19:11.286782       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="179.016µs"
	E0719 05:19:14.520388       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0719 05:19:14.635225       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0719 05:19:24.289218       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="135.518µs"
	I0719 05:19:27.271810       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-64dbdb65b8" duration="57.007µs"
	I0719 05:19:40.266953       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-64dbdb65b8" duration="80.208µs"
	E0719 05:19:44.527571       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0719 05:19:44.652317       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 05:20:14.534346       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0719 05:20:14.664310       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 05:20:44.542211       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0719 05:20:44.677312       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0719 05:20:46.260355       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="90.812µs"
	I0719 05:20:55.266702       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-64dbdb65b8" duration="85.111µs"
	I0719 05:21:01.265269       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="96.213µs"
	I0719 05:21:10.308876       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-64dbdb65b8" duration="155.918µs"
	E0719 05:21:14.550800       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0719 05:21:14.689423       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-controller-manager [1f63215b4697] <==
	I0719 05:15:13.499682       1 shared_informer.go:320] Caches are synced for namespace
	I0719 05:15:13.503015       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="202.953881ms"
	I0719 05:15:13.505302       1 shared_informer.go:320] Caches are synced for garbage collector
	I0719 05:15:13.576884       1 shared_informer.go:320] Caches are synced for service account
	I0719 05:15:13.576935       1 shared_informer.go:320] Caches are synced for garbage collector
	I0719 05:15:13.577095       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0719 05:15:13.600207       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="97.01081ms"
	I0719 05:15:13.600458       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="72.61µs"
	I0719 05:15:13.606079       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="290.138µs"
	I0719 05:15:13.773637       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="86.812µs"
	I0719 05:15:15.496673       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="203.991816ms"
	I0719 05:15:15.676180       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="179.436724ms"
	I0719 05:15:15.676562       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="104.914µs"
	I0719 05:15:18.693402       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="102.513µs"
	I0719 05:15:18.794680       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="120.616µs"
	I0719 05:15:19.205852       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-857600"
	I0719 05:15:29.011160       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="108.714µs"
	I0719 05:15:29.438556       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="145.619µs"
	I0719 05:15:29.480991       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="52.707µs"
	I0719 05:15:46.557798       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="456.519046ms"
	I0719 05:15:46.558145       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="180.523µs"
	I0719 05:16:05.196217       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="467.801596ms"
	I0719 05:16:05.346723       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="149.936843ms"
	I0719 05:16:05.346944       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="69.009µs"
	I0719 05:16:05.419572       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="240.332µs"
	
	
	==> kube-proxy [56deee6eea2d] <==
	E0719 05:15:17.610948       1 metrics.go:338] "failed to initialize nfacct client" err="nfacct sub-system not available"
	E0719 05:15:17.632640       1 metrics.go:338] "failed to initialize nfacct client" err="nfacct sub-system not available"
	I0719 05:15:17.686905       1 server_linux.go:67] "Using iptables proxy"
	I0719 05:15:18.092043       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.94.2"]
	E0719 05:15:18.092374       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0719 05:15:18.285085       1 server.go:244] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0719 05:15:18.285284       1 server_linux.go:170] "Using iptables Proxier"
	I0719 05:15:18.292114       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	E0719 05:15:18.313165       1 proxier.go:283] "Failed to create nfacct runner, nfacct based metrics won't be available" err="nfacct sub-system not available" ipFamily="IPv4"
	E0719 05:15:18.331624       1 proxier.go:283] "Failed to create nfacct runner, nfacct based metrics won't be available" err="nfacct sub-system not available" ipFamily="IPv6"
	I0719 05:15:18.331826       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0719 05:15:18.331854       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 05:15:18.371461       1 config.go:197] "Starting service config controller"
	I0719 05:15:18.371753       1 config.go:326] "Starting node config controller"
	I0719 05:15:18.371777       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 05:15:18.371794       1 config.go:104] "Starting endpoint slice config controller"
	I0719 05:15:18.371881       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 05:15:18.372141       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 05:15:18.473312       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0719 05:15:18.473355       1 shared_informer.go:320] Caches are synced for service config
	I0719 05:15:18.473618       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [e442a865f3cd] <==
	E0719 05:17:16.940649       1 metrics.go:338] "failed to initialize nfacct client" err="nfacct sub-system not available"
	E0719 05:17:16.959301       1 metrics.go:338] "failed to initialize nfacct client" err="nfacct sub-system not available"
	I0719 05:17:17.002866       1 server_linux.go:67] "Using iptables proxy"
	I0719 05:17:17.745679       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.94.2"]
	E0719 05:17:17.745891       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0719 05:17:17.805008       1 server.go:244] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0719 05:17:17.805162       1 server_linux.go:170] "Using iptables Proxier"
	I0719 05:17:17.810504       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	E0719 05:17:17.827741       1 proxier.go:283] "Failed to create nfacct runner, nfacct based metrics won't be available" err="nfacct sub-system not available" ipFamily="IPv4"
	E0719 05:17:17.844983       1 proxier.go:283] "Failed to create nfacct runner, nfacct based metrics won't be available" err="nfacct sub-system not available" ipFamily="IPv6"
	I0719 05:17:17.845525       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0719 05:17:17.845630       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 05:17:17.847492       1 config.go:197] "Starting service config controller"
	I0719 05:17:17.847614       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 05:17:17.847622       1 config.go:104] "Starting endpoint slice config controller"
	I0719 05:17:17.847636       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 05:17:17.851415       1 config.go:326] "Starting node config controller"
	I0719 05:17:17.851677       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 05:17:17.948644       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0719 05:17:17.948800       1 shared_informer.go:320] Caches are synced for service config
	I0719 05:17:17.951911       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5ab315262a22] <==
	I0719 05:16:57.982989       1 serving.go:386] Generated self-signed cert in-memory
	W0719 05:17:03.967698       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0719 05:17:03.972156       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found]
	W0719 05:17:03.972187       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0719 05:17:03.972202       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0719 05:17:04.167050       1 server.go:164] "Starting Kubernetes Scheduler" version="v1.31.0-beta.0"
	I0719 05:17:04.167116       1 server.go:166] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 05:17:04.174126       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0719 05:17:04.174372       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0719 05:17:04.174413       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0719 05:17:04.177200       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0719 05:17:04.277921       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [85b22019ef17] <==
	E0719 05:15:04.619988       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0719 05:15:04.678210       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0719 05:15:04.678362       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0719 05:15:04.784575       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0719 05:15:04.784830       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0719 05:15:04.829081       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0719 05:15:04.829232       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0719 05:15:04.930329       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0719 05:15:04.930531       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0719 05:15:04.947008       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0719 05:15:04.947150       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0719 05:15:04.948110       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0719 05:15:04.948223       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0719 05:15:04.999756       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0719 05:15:04.999871       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0719 05:15:05.130604       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0719 05:15:05.130692       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0719 05:15:05.239957       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0719 05:15:05.240187       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0719 05:15:05.269005       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0719 05:15:05.269127       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0719 05:15:05.292680       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0719 05:15:05.292857       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0719 05:15:07.583846       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0719 05:16:07.785045       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 19 05:20:05 no-preload-857600 kubelet[1594]: E0719 05:20:05.250704    1594 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-p4shw" podUID="65757233-14b9-4ca4-a918-803b5e39bfaf"
	Jul 19 05:20:07 no-preload-857600 kubelet[1594]: E0719 05:20:07.246715    1594 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-64dbdb65b8-l8477" podUID="77f02ca1-5339-4432-9842-e7f39377bfa5"
	Jul 19 05:20:18 no-preload-857600 kubelet[1594]: E0719 05:20:18.241764    1594 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-64dbdb65b8-l8477" podUID="77f02ca1-5339-4432-9842-e7f39377bfa5"
	Jul 19 05:20:20 no-preload-857600 kubelet[1594]: E0719 05:20:20.243390    1594 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-p4shw" podUID="65757233-14b9-4ca4-a918-803b5e39bfaf"
	Jul 19 05:20:30 no-preload-857600 kubelet[1594]: E0719 05:20:30.245329    1594 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-64dbdb65b8-l8477" podUID="77f02ca1-5339-4432-9842-e7f39377bfa5"
	Jul 19 05:20:35 no-preload-857600 kubelet[1594]: E0719 05:20:35.314997    1594 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 19 05:20:35 no-preload-857600 kubelet[1594]: E0719 05:20:35.315201    1594 kuberuntime_image.go:55] "Failed to pull image" err="Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 19 05:20:35 no-preload-857600 kubelet[1594]: E0719 05:20:35.315531    1594 kuberuntime_manager.go:1257] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4sb2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-78fcd8795b-p4shw_kube-system(65757233-14b9-4ca4-a918-803b5e39bfaf): ErrImagePull: Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host" logger="UnhandledError"
	Jul 19 05:20:35 no-preload-857600 kubelet[1594]: E0719 05:20:35.317013    1594 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host\"" pod="kube-system/metrics-server-78fcd8795b-p4shw" podUID="65757233-14b9-4ca4-a918-803b5e39bfaf"
	Jul 19 05:20:41 no-preload-857600 kubelet[1594]: E0719 05:20:41.797179    1594 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" image="registry.k8s.io/echoserver:1.4"
	Jul 19 05:20:41 no-preload-857600 kubelet[1594]: E0719 05:20:41.797343    1594 kuberuntime_image.go:55] "Failed to pull image" err="[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" image="registry.k8s.io/echoserver:1.4"
	Jul 19 05:20:41 no-preload-857600 kubelet[1594]: E0719 05:20:41.797675    1594 kuberuntime_manager.go:1257] "Unhandled Error" err="container &Container{Name:dashboard-metrics-scraper,Image:registry.k8s.io/echoserver:1.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:8000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-volume,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kwjxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 8000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:30,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:*2001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dashboard-metrics-scraper-64dbdb65b8-l8477_kubernetes-dashboard(77f02ca1-5339-4432-9842-e7f39377bfa5): ErrImagePull: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" logger="UnhandledError"
	Jul 19 05:20:41 no-preload-857600 kubelet[1594]: E0719 05:20:41.800673    1594 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-64dbdb65b8-l8477" podUID="77f02ca1-5339-4432-9842-e7f39377bfa5"
	Jul 19 05:20:46 no-preload-857600 kubelet[1594]: E0719 05:20:46.239407    1594 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-p4shw" podUID="65757233-14b9-4ca4-a918-803b5e39bfaf"
	Jul 19 05:20:55 no-preload-857600 kubelet[1594]: E0719 05:20:55.240625    1594 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-64dbdb65b8-l8477" podUID="77f02ca1-5339-4432-9842-e7f39377bfa5"
	Jul 19 05:21:01 no-preload-857600 kubelet[1594]: E0719 05:21:01.241058    1594 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-p4shw" podUID="65757233-14b9-4ca4-a918-803b5e39bfaf"
	Jul 19 05:21:10 no-preload-857600 kubelet[1594]: E0719 05:21:10.243913    1594 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-64dbdb65b8-l8477" podUID="77f02ca1-5339-4432-9842-e7f39377bfa5"
	Jul 19 05:21:12 no-preload-857600 kubelet[1594]: E0719 05:21:12.237412    1594 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-p4shw" podUID="65757233-14b9-4ca4-a918-803b5e39bfaf"
	Jul 19 05:21:25 no-preload-857600 kubelet[1594]: E0719 05:21:25.238613    1594 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-64dbdb65b8-l8477" podUID="77f02ca1-5339-4432-9842-e7f39377bfa5"
	Jul 19 05:21:27 no-preload-857600 kubelet[1594]: E0719 05:21:27.239551    1594 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-p4shw" podUID="65757233-14b9-4ca4-a918-803b5e39bfaf"
	Jul 19 05:21:37 no-preload-857600 kubelet[1594]: E0719 05:21:37.237059    1594 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-64dbdb65b8-l8477" podUID="77f02ca1-5339-4432-9842-e7f39377bfa5"
	Jul 19 05:21:37 no-preload-857600 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Jul 19 05:21:37 no-preload-857600 kubelet[1594]: I0719 05:21:37.669950    1594 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Jul 19 05:21:37 no-preload-857600 systemd[1]: kubelet.service: Deactivated successfully.
	Jul 19 05:21:37 no-preload-857600 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [f574e991ac3b] <==
	2024/07/19 05:17:43 Starting overwatch
	2024/07/19 05:17:43 Using namespace: kubernetes-dashboard
	2024/07/19 05:17:43 Using in-cluster config to connect to apiserver
	2024/07/19 05:17:43 Using secret token for csrf signing
	2024/07/19 05:17:43 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/07/19 05:17:43 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/07/19 05:17:43 Successful initial request to the apiserver, version: v1.31.0-beta.0
	2024/07/19 05:17:43 Generating JWE encryption key
	2024/07/19 05:17:43 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/07/19 05:17:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/07/19 05:17:44 Initializing JWE encryption key from synchronized object
	2024/07/19 05:17:44 Creating in-cluster Sidecar client
	2024/07/19 05:17:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/19 05:17:44 Serving insecurely on HTTP port: 9090
	2024/07/19 05:18:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/19 05:18:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/19 05:19:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/19 05:19:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/19 05:20:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/19 05:20:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/19 05:21:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [0f5f9cd278b3] <==
	I0719 05:17:19.466662       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0719 05:17:19.489735       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0719 05:17:19.490074       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0719 05:17:37.475148       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0719 05:17:37.475647       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8b64b39a-5974-422e-aa31-9a75b12ec12a", APIVersion:"v1", ResourceVersion:"692", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-857600_1d37b5c5-e8fc-44e9-95a6-51ecd66cfe98 became leader
	I0719 05:17:37.475793       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-857600_1d37b5c5-e8fc-44e9-95a6-51ecd66cfe98!
	I0719 05:17:37.577190       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-857600_1d37b5c5-e8fc-44e9-95a6-51ecd66cfe98!
	
	
	==> storage-provisioner [5a57bcf51ab5] <==
	I0719 05:15:19.604119       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0719 05:15:19.620538       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0719 05:15:19.620711       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0719 05:15:19.642686       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0719 05:15:19.643107       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-857600_e42b9fe7-ba2d-4bfb-a81f-f5eae859c4c2!
	I0719 05:15:19.643689       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8b64b39a-5974-422e-aa31-9a75b12ec12a", APIVersion:"v1", ResourceVersion:"432", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-857600_e42b9fe7-ba2d-4bfb-a81f-f5eae859c4c2 became leader
	I0719 05:15:19.744661       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-857600_e42b9fe7-ba2d-4bfb-a81f-f5eae859c4c2!
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0719 05:21:58.819349    7932 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-857600 -n no-preload-857600
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-857600 -n no-preload-857600: exit status 2 (1.5111618s)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
** stderr ** 
	W0719 05:22:12.606679    2600 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "no-preload-857600" apiserver is not running, skipping kubectl commands (state="Paused")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (37.73s)

                                                
                                    

Test pass (315/348)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 10.88
4 TestDownloadOnly/v1.20.0/preload-exists 0.08
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.76
9 TestDownloadOnly/v1.20.0/DeleteAll 2.67
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 1.39
12 TestDownloadOnly/v1.30.3/json-events 8.38
13 TestDownloadOnly/v1.30.3/preload-exists 0
16 TestDownloadOnly/v1.30.3/kubectl 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.38
18 TestDownloadOnly/v1.30.3/DeleteAll 2.14
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 1.23
21 TestDownloadOnly/v1.31.0-beta.0/json-events 12.32
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.28
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 2.17
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 1.2
29 TestDownloadOnlyKic 3.97
30 TestBinaryMirror 3.62
31 TestOffline 227.12
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.29
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.28
36 TestAddons/Setup 596.78
40 TestAddons/parallel/InspektorGadget 14.08
41 TestAddons/parallel/MetricsServer 7.49
42 TestAddons/parallel/HelmTiller 16.93
44 TestAddons/parallel/CSI 88.59
45 TestAddons/parallel/Headlamp 34.73
46 TestAddons/parallel/CloudSpanner 7.91
47 TestAddons/parallel/LocalPath 60.24
48 TestAddons/parallel/NvidiaDevicePlugin 7.5
49 TestAddons/parallel/Yakd 6.06
50 TestAddons/parallel/Volcano 58.74
53 TestAddons/serial/GCPAuth/Namespaces 0.39
54 TestAddons/StoppedEnableDisable 15.17
55 TestCertOptions 96.94
56 TestCertExpiration 367.44
57 TestDockerFlags 121.33
58 TestForceSystemdFlag 176.33
59 TestForceSystemdEnv 185.96
66 TestErrorSpam/start 4.08
67 TestErrorSpam/status 4.03
68 TestErrorSpam/pause 4.14
69 TestErrorSpam/unpause 5.45
70 TestErrorSpam/stop 20.66
73 TestFunctional/serial/CopySyncFile 0.03
74 TestFunctional/serial/StartWithProxy 86.49
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 48.29
77 TestFunctional/serial/KubeContext 0.14
78 TestFunctional/serial/KubectlGetPods 0.25
81 TestFunctional/serial/CacheCmd/cache/add_remote 7.65
82 TestFunctional/serial/CacheCmd/cache/add_local 4.55
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.27
84 TestFunctional/serial/CacheCmd/cache/list 0.25
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 1.25
86 TestFunctional/serial/CacheCmd/cache/cache_reload 5.71
87 TestFunctional/serial/CacheCmd/cache/delete 0.51
88 TestFunctional/serial/MinikubeKubectlCmd 0.53
90 TestFunctional/serial/ExtraConfig 56.35
91 TestFunctional/serial/ComponentHealth 0.19
92 TestFunctional/serial/LogsCmd 2.85
93 TestFunctional/serial/LogsFileCmd 3.05
94 TestFunctional/serial/InvalidService 5.91
98 TestFunctional/parallel/DryRun 3.95
99 TestFunctional/parallel/InternationalLanguage 1.37
100 TestFunctional/parallel/StatusCmd 4.96
105 TestFunctional/parallel/AddonsCmd 1.02
106 TestFunctional/parallel/PersistentVolumeClaim 100.74
108 TestFunctional/parallel/SSHCmd 3.1
109 TestFunctional/parallel/CpCmd 10.45
110 TestFunctional/parallel/MySQL 76.19
111 TestFunctional/parallel/FileSync 1.34
112 TestFunctional/parallel/CertSync 10.79
116 TestFunctional/parallel/NodeLabels 0.32
118 TestFunctional/parallel/NonActiveRuntimeDisabled 1.92
120 TestFunctional/parallel/License 5.16
121 TestFunctional/parallel/ServiceCmd/DeployApp 26.62
122 TestFunctional/parallel/Version/short 0.28
123 TestFunctional/parallel/Version/components 2.14
124 TestFunctional/parallel/ImageCommands/ImageListShort 0.93
125 TestFunctional/parallel/ImageCommands/ImageListTable 0.95
126 TestFunctional/parallel/ImageCommands/ImageListJson 0.94
127 TestFunctional/parallel/ImageCommands/ImageListYaml 0.94
128 TestFunctional/parallel/ImageCommands/ImageBuild 9.88
129 TestFunctional/parallel/ImageCommands/Setup 3.15
130 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.01
131 TestFunctional/parallel/DockerEnv/powershell 12.13
132 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 4.12
133 TestFunctional/parallel/UpdateContextCmd/no_changes 0.7
134 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.74
135 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.72
136 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 4.55
137 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.13
138 TestFunctional/parallel/ImageCommands/ImageRemove 2.24
139 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 5.34
141 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 2.84
142 TestFunctional/parallel/ServiceCmd/List 2.65
143 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
145 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 36.29
146 TestFunctional/parallel/ServiceCmd/JSONOutput 2.98
147 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 3.44
148 TestFunctional/parallel/ServiceCmd/HTTPS 15.03
149 TestFunctional/parallel/ProfileCmd/profile_not_create 2.14
150 TestFunctional/parallel/ProfileCmd/profile_list 1.94
151 TestFunctional/parallel/ProfileCmd/profile_json_output 2.09
152 TestFunctional/parallel/ServiceCmd/Format 15.03
153 TestFunctional/parallel/ServiceCmd/URL 15.02
154 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.3
159 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.23
160 TestFunctional/delete_echo-server_images 0.39
161 TestFunctional/delete_my-image_image 0.17
162 TestFunctional/delete_minikube_cached_images 0.2
166 TestMultiControlPlane/serial/StartCluster 264.41
167 TestMultiControlPlane/serial/DeployApp 27.4
168 TestMultiControlPlane/serial/PingHostFromPods 3.82
169 TestMultiControlPlane/serial/AddWorkerNode 74.35
170 TestMultiControlPlane/serial/NodeLabels 0.24
171 TestMultiControlPlane/serial/HAppyAfterClusterStart 3.82
172 TestMultiControlPlane/serial/CopyFile 77.79
173 TestMultiControlPlane/serial/StopSecondaryNode 15.59
174 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 2.77
175 TestMultiControlPlane/serial/RestartSecondaryNode 159.27
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 3.71
177 TestMultiControlPlane/serial/RestartClusterKeepsNodes 275.35
178 TestMultiControlPlane/serial/DeleteSecondaryNode 23.95
179 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 2.54
180 TestMultiControlPlane/serial/StopCluster 38.46
181 TestMultiControlPlane/serial/RestartCluster 164.68
182 TestMultiControlPlane/serial/DegradedAfterClusterRestart 2.56
183 TestMultiControlPlane/serial/AddSecondaryNode 91.84
184 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 3.61
187 TestImageBuild/serial/Setup 77.93
188 TestImageBuild/serial/NormalBuild 4.14
189 TestImageBuild/serial/BuildWithBuildArg 2.72
190 TestImageBuild/serial/BuildWithDockerIgnore 1.75
191 TestImageBuild/serial/BuildWithSpecifiedDockerfile 2.04
195 TestJSONOutput/start/Command 116.42
196 TestJSONOutput/start/Audit 0
198 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/pause/Command 1.73
202 TestJSONOutput/pause/Audit 0
204 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/unpause/Command 1.62
208 TestJSONOutput/unpause/Audit 0
210 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
213 TestJSONOutput/stop/Command 13.02
214 TestJSONOutput/stop/Audit 0
216 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
217 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
218 TestErrorJSONOutput 1.52
220 TestKicCustomNetwork/create_custom_network 83.28
221 TestKicCustomNetwork/use_default_bridge_network 82.11
222 TestKicExistingNetwork 81.12
223 TestKicCustomSubnet 82.64
224 TestKicStaticIP 85.16
225 TestMainNoArgs 0.24
226 TestMinikubeProfile 169.34
229 TestMountStart/serial/StartWithMountFirst 33.08
230 TestMountStart/serial/VerifyMountFirst 1.16
231 TestMountStart/serial/StartWithMountSecond 31.81
232 TestMountStart/serial/VerifyMountSecond 1.19
233 TestMountStart/serial/DeleteFirst 4.14
234 TestMountStart/serial/VerifyMountPostDelete 1.16
235 TestMountStart/serial/Stop 2.6
236 TestMountStart/serial/RestartStopped 23.52
237 TestMountStart/serial/VerifyMountPostStop 1.22
240 TestMultiNode/serial/FreshStart2Nodes 183.62
241 TestMultiNode/serial/DeployApp2Nodes 37.76
242 TestMultiNode/serial/PingHostFrom2Pods 2.59
243 TestMultiNode/serial/AddNode 67.51
244 TestMultiNode/serial/MultiNodeLabels 0.2
245 TestMultiNode/serial/ProfileList 1.52
246 TestMultiNode/serial/CopyFile 42.84
247 TestMultiNode/serial/StopNode 7
248 TestMultiNode/serial/StartAfterStop 29.66
249 TestMultiNode/serial/RestartKeepsNodes 141.87
250 TestMultiNode/serial/DeleteNode 14.16
251 TestMultiNode/serial/StopMultiNode 25.84
252 TestMultiNode/serial/RestartMultiNode 80.03
253 TestMultiNode/serial/ValidateNameConflict 77.13
257 TestPreload 191.71
258 TestScheduledStopWindows 140.58
262 TestInsufficientStorage 56.28
263 TestRunningBinaryUpgrade 440.24
265 TestKubernetesUpgrade 596.52
266 TestMissingContainerUpgrade 334.77
268 TestNoKubernetes/serial/StartNoK8sWithVersion 0.34
269 TestStoppedBinaryUpgrade/Setup 1.83
270 TestNoKubernetes/serial/StartWithK8s 139.08
271 TestStoppedBinaryUpgrade/Upgrade 416.48
272 TestNoKubernetes/serial/StartWithStopK8s 67.64
273 TestNoKubernetes/serial/Start 52.49
274 TestNoKubernetes/serial/VerifyK8sNotRunning 1.26
275 TestNoKubernetes/serial/ProfileList 12.43
276 TestNoKubernetes/serial/Stop 6.83
277 TestNoKubernetes/serial/StartNoArgs 16.26
278 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 1.25
290 TestStoppedBinaryUpgrade/MinikubeLogs 5.02
299 TestPause/serial/Start 157.82
300 TestPause/serial/SecondStartNoReconfiguration 71.18
301 TestPause/serial/Pause 2.08
302 TestPause/serial/VerifyStatus 1.56
303 TestPause/serial/Unpause 1.87
304 TestPause/serial/PauseAgain 2.18
305 TestPause/serial/DeletePaused 41.33
307 TestNetworkPlugins/group/auto/Start 153.01
308 TestNetworkPlugins/group/kindnet/Start 194.82
309 TestNetworkPlugins/group/auto/KubeletFlags 1.7
310 TestNetworkPlugins/group/auto/NetCatPod 21.73
311 TestNetworkPlugins/group/calico/Start 223.55
312 TestNetworkPlugins/group/auto/DNS 0.42
313 TestNetworkPlugins/group/auto/Localhost 0.36
314 TestNetworkPlugins/group/auto/HairPin 0.46
315 TestNetworkPlugins/group/kindnet/ControllerPod 6.02
316 TestNetworkPlugins/group/kindnet/KubeletFlags 1.65
317 TestNetworkPlugins/group/kindnet/NetCatPod 29.06
318 TestNetworkPlugins/group/custom-flannel/Start 156.58
319 TestNetworkPlugins/group/kindnet/DNS 0.38
320 TestNetworkPlugins/group/kindnet/Localhost 0.37
321 TestNetworkPlugins/group/kindnet/HairPin 0.35
322 TestNetworkPlugins/group/false/Start 113.36
323 TestNetworkPlugins/group/calico/ControllerPod 5.03
324 TestNetworkPlugins/group/calico/KubeletFlags 1.36
325 TestNetworkPlugins/group/calico/NetCatPod 20.67
326 TestNetworkPlugins/group/custom-flannel/KubeletFlags 1.4
327 TestNetworkPlugins/group/custom-flannel/NetCatPod 20.86
328 TestNetworkPlugins/group/calico/DNS 0.52
329 TestNetworkPlugins/group/calico/Localhost 0.42
330 TestNetworkPlugins/group/calico/HairPin 0.39
331 TestNetworkPlugins/group/custom-flannel/DNS 0.58
332 TestNetworkPlugins/group/custom-flannel/Localhost 0.37
333 TestNetworkPlugins/group/custom-flannel/HairPin 0.38
334 TestNetworkPlugins/group/false/KubeletFlags 1.48
335 TestNetworkPlugins/group/false/NetCatPod 23.85
336 TestNetworkPlugins/group/enable-default-cni/Start 181.68
337 TestNetworkPlugins/group/flannel/Start 161.6
338 TestNetworkPlugins/group/false/DNS 0.41
339 TestNetworkPlugins/group/false/Localhost 0.42
340 TestNetworkPlugins/group/false/HairPin 0.41
341 TestNetworkPlugins/group/bridge/Start 138.4
342 TestNetworkPlugins/group/kubenet/Start 135.83
343 TestNetworkPlugins/group/flannel/ControllerPod 6.02
344 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 1.37
345 TestNetworkPlugins/group/enable-default-cni/NetCatPod 22.7
346 TestNetworkPlugins/group/flannel/KubeletFlags 2.1
347 TestNetworkPlugins/group/bridge/KubeletFlags 2.25
348 TestNetworkPlugins/group/flannel/NetCatPod 24.14
349 TestNetworkPlugins/group/bridge/NetCatPod 23.17
350 TestNetworkPlugins/group/enable-default-cni/DNS 0.44
351 TestNetworkPlugins/group/enable-default-cni/Localhost 0.38
352 TestNetworkPlugins/group/enable-default-cni/HairPin 0.31
353 TestNetworkPlugins/group/flannel/DNS 0.42
354 TestNetworkPlugins/group/flannel/Localhost 0.3
355 TestNetworkPlugins/group/flannel/HairPin 0.48
356 TestNetworkPlugins/group/bridge/DNS 0.43
357 TestNetworkPlugins/group/bridge/Localhost 0.44
358 TestNetworkPlugins/group/bridge/HairPin 0.59
359 TestNetworkPlugins/group/kubenet/KubeletFlags 1.99
360 TestNetworkPlugins/group/kubenet/NetCatPod 27.06
361 TestNetworkPlugins/group/kubenet/DNS 0.63
362 TestNetworkPlugins/group/kubenet/Localhost 0.55
363 TestNetworkPlugins/group/kubenet/HairPin 0.56
365 TestStartStop/group/old-k8s-version/serial/FirstStart 293.96
367 TestStartStop/group/no-preload/serial/FirstStart 216.34
369 TestStartStop/group/embed-certs/serial/FirstStart 174.18
371 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 133.8
372 TestStartStop/group/embed-certs/serial/DeployApp 10.92
373 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 3.34
374 TestStartStop/group/embed-certs/serial/Stop 13.17
375 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.79
376 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 1.33
377 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.97
378 TestStartStop/group/embed-certs/serial/SecondStart 294.72
379 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.36
380 TestStartStop/group/no-preload/serial/DeployApp 12.84
381 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 1.29
382 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 297.88
383 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 5.49
384 TestStartStop/group/no-preload/serial/Stop 16.17
385 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 1.51
386 TestStartStop/group/no-preload/serial/SecondStart 298.83
387 TestStartStop/group/old-k8s-version/serial/DeployApp 15.5
388 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 4.18
389 TestStartStop/group/old-k8s-version/serial/Stop 14.33
390 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 1.27
392 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.02
393 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.43
394 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.94
395 TestStartStop/group/embed-certs/serial/Pause 10.17
396 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.02
397 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.52
399 TestStartStop/group/newest-cni/serial/FirstStart 81.92
400 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 1.06
401 TestStartStop/group/default-k8s-diff-port/serial/Pause 11.73
402 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.03
403 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.49
404 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 1.03
406 TestStartStop/group/newest-cni/serial/DeployApp 0
407 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 4.63
408 TestStartStop/group/newest-cni/serial/Stop 13.09
409 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 1.19
410 TestStartStop/group/newest-cni/serial/SecondStart 46.99
411 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
412 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
413 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 1.12
414 TestStartStop/group/newest-cni/serial/Pause 11.77
415 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.02
416 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.55
417 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.88
418 TestStartStop/group/old-k8s-version/serial/Pause 10.03
TestDownloadOnly/v1.20.0/json-events (10.88s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-081200 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-081200 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker: (10.8806425s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (10.88s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.08s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.76s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-081200
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-081200: exit status 85 (758.9272ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-081200 | minikube3\jenkins | v1.33.1 | 19 Jul 24 03:26 UTC |          |
	|         | -p download-only-081200        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=docker                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 03:26:47
	Running on machine: minikube3
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 03:26:47.421334   10832 out.go:291] Setting OutFile to fd 628 ...
	I0719 03:26:47.422351   10832 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:26:47.422351   10832 out.go:304] Setting ErrFile to fd 636...
	I0719 03:26:47.422351   10832 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0719 03:26:47.435326   10832 root.go:314] Error reading config file at C:\Users\jenkins.minikube3\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube3\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0719 03:26:47.448329   10832 out.go:298] Setting JSON to true
	I0719 03:26:47.452326   10832 start.go:129] hostinfo: {"hostname":"minikube3","uptime":176592,"bootTime":1721183014,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0719 03:26:47.452326   10832 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 03:26:47.472346   10832 out.go:97] [download-only-081200] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	W0719 03:26:47.473298   10832 preload.go:293] Failed to list preload files: open C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0719 03:26:47.473298   10832 notify.go:220] Checking for updates...
	I0719 03:26:47.476391   10832 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0719 03:26:47.481967   10832 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0719 03:26:47.489169   10832 out.go:169] MINIKUBE_LOCATION=19302
	I0719 03:26:47.495305   10832 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0719 03:26:47.502958   10832 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0719 03:26:47.503868   10832 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 03:26:47.794776   10832 docker.go:123] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0719 03:26:47.806960   10832 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0719 03:26:49.118293   10832 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.3107873s)
	I0719 03:26:49.119122   10832 info.go:266] docker info: {ID:924ecda6-fdfd-44a1-a6d3-1c1814628cc9 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:57 OomKillDisable:true NGoroutines:81 SystemTime:2024-07-19 03:26:49.065209422 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion
:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:D
ocker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0719 03:26:49.132145   10832 out.go:97] Using the docker driver based on user configuration
	I0719 03:26:49.133153   10832 start.go:297] selected driver: docker
	I0719 03:26:49.133297   10832 start.go:901] validating driver "docker" against <nil>
	I0719 03:26:49.152254   10832 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0719 03:26:49.520829   10832 info.go:266] docker info: {ID:924ecda6-fdfd-44a1-a6d3-1c1814628cc9 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:57 OomKillDisable:true NGoroutines:81 SystemTime:2024-07-19 03:26:49.464518049 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion
:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:D
ocker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0719 03:26:49.520829   10832 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 03:26:49.643330   10832 start_flags.go:393] Using suggested 16300MB memory alloc based on sys=65534MB, container=32098MB
	I0719 03:26:49.644521   10832 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0719 03:26:49.652156   10832 out.go:169] Using Docker Desktop driver with root privileges
	I0719 03:26:49.657947   10832 cni.go:84] Creating CNI manager for ""
	I0719 03:26:49.657947   10832 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0719 03:26:49.658530   10832 start.go:340] cluster config:
	{Name:download-only-081200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:16300 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-081200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 03:26:49.664273   10832 out.go:97] Starting "download-only-081200" primary control-plane node in "download-only-081200" cluster
	I0719 03:26:49.664273   10832 cache.go:121] Beginning downloading kic base image for docker with docker
	I0719 03:26:49.671106   10832 out.go:97] Pulling base image v0.0.44-1721324606-19298 ...
	I0719 03:26:49.671522   10832 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local docker daemon
	I0719 03:26:49.671522   10832 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0719 03:26:49.729351   10832 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0719 03:26:49.729351   10832 cache.go:56] Caching tarball of preloaded images
	I0719 03:26:49.730039   10832 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0719 03:26:49.736605   10832 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0719 03:26:49.736605   10832 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0719 03:26:49.849666   10832 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f to local cache
	I0719 03:26:49.849666   10832 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f.tar -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.44-1721324606-19298@sha256_1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f.tar
	I0719 03:26:49.849666   10832 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f.tar -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.44-1721324606-19298@sha256_1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f.tar
	I0719 03:26:49.849666   10832 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory
	I0719 03:26:49.851114   10832 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f to local cache
	I0719 03:26:49.852798   10832 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0719 03:26:54.487859   10832 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0719 03:26:54.489670   10832 preload.go:254] verifying checksum of C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0719 03:26:55.708661   10832 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0719 03:26:55.709637   10832 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\download-only-081200\config.json ...
	I0719 03:26:55.709637   10832 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\download-only-081200\config.json: {Name:mk1b7ee02eb618e94524e6ecc99182e6f755f089 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 03:26:55.710393   10832 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0719 03:26:55.712528   10832 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\windows\amd64\v1.20.0/kubectl.exe
	I0719 03:26:57.162115   10832 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f as a tarball
	
	
	* The control-plane node download-only-081200 host does not exist
	  To start a cluster, run: "minikube start -p download-only-081200"

-- /stdout --
** stderr ** 
	W0719 03:26:58.284909   14612 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.76s)

TestDownloadOnly/v1.20.0/DeleteAll (2.67s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (2.6699655s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (2.67s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.39s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-081200
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-081200: (1.3877742s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.39s)

TestDownloadOnly/v1.30.3/json-events (8.38s)

=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-404100 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-404100 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=docker: (8.3775754s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (8.38s)

TestDownloadOnly/v1.30.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

TestDownloadOnly/v1.30.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.3/kubectl
--- PASS: TestDownloadOnly/v1.30.3/kubectl (0.00s)

TestDownloadOnly/v1.30.3/LogsDuration (0.38s)

=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-404100
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-404100: exit status 85 (383.3353ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-081200 | minikube3\jenkins | v1.33.1 | 19 Jul 24 03:26 UTC |                     |
	|         | -p download-only-081200        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=docker                |                      |                   |         |                     |                     |
	| delete  | --all                          | minikube             | minikube3\jenkins | v1.33.1 | 19 Jul 24 03:26 UTC | 19 Jul 24 03:27 UTC |
	| delete  | -p download-only-081200        | download-only-081200 | minikube3\jenkins | v1.33.1 | 19 Jul 24 03:27 UTC | 19 Jul 24 03:27 UTC |
	| start   | -o=json --download-only        | download-only-404100 | minikube3\jenkins | v1.33.1 | 19 Jul 24 03:27 UTC |                     |
	|         | -p download-only-404100        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=docker                |                      |                   |         |                     |                     |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 03:27:03
	Running on machine: minikube3
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 03:27:03.196232    2472 out.go:291] Setting OutFile to fd 764 ...
	I0719 03:27:03.197416    2472 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:27:03.197416    2472 out.go:304] Setting ErrFile to fd 768...
	I0719 03:27:03.197416    2472 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:27:03.220913    2472 out.go:298] Setting JSON to true
	I0719 03:27:03.224385    2472 start.go:129] hostinfo: {"hostname":"minikube3","uptime":176608,"bootTime":1721183014,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0719 03:27:03.224385    2472 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 03:27:03.230961    2472 out.go:97] [download-only-404100] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0719 03:27:03.231262    2472 notify.go:220] Checking for updates...
	I0719 03:27:03.235267    2472 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0719 03:27:03.240756    2472 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0719 03:27:03.245760    2472 out.go:169] MINIKUBE_LOCATION=19302
	I0719 03:27:03.251738    2472 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0719 03:27:03.258787    2472 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0719 03:27:03.259789    2472 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 03:27:03.544760    2472 docker.go:123] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0719 03:27:03.556527    2472 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0719 03:27:04.074412    2472 info.go:266] docker info: {ID:924ecda6-fdfd-44a1-a6d3-1c1814628cc9 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:57 OomKillDisable:true NGoroutines:81 SystemTime:2024-07-19 03:27:04.036810378 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion
:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:D
ocker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0719 03:27:04.220305    2472 out.go:97] Using the docker driver based on user configuration
	I0719 03:27:04.220521    2472 start.go:297] selected driver: docker
	I0719 03:27:04.220666    2472 start.go:901] validating driver "docker" against <nil>
	I0719 03:27:04.238099    2472 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0719 03:27:04.592915    2472 info.go:266] docker info: {ID:924ecda6-fdfd-44a1-a6d3-1c1814628cc9 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:57 OomKillDisable:true NGoroutines:81 SystemTime:2024-07-19 03:27:04.546232193 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion
:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:D
ocker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0719 03:27:04.593437    2472 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 03:27:04.646234    2472 start_flags.go:393] Using suggested 16300MB memory alloc based on sys=65534MB, container=32098MB
	I0719 03:27:04.647081    2472 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0719 03:27:04.651869    2472 out.go:169] Using Docker Desktop driver with root privileges
	I0719 03:27:04.654914    2472 cni.go:84] Creating CNI manager for ""
	I0719 03:27:04.654914    2472 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 03:27:04.654914    2472 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 03:27:04.656099    2472 start.go:340] cluster config:
	{Name:download-only-404100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:16300 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-404100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 03:27:04.658700    2472 out.go:97] Starting "download-only-404100" primary control-plane node in "download-only-404100" cluster
	I0719 03:27:04.658700    2472 cache.go:121] Beginning downloading kic base image for docker with docker
	I0719 03:27:04.662849    2472 out.go:97] Pulling base image v0.0.44-1721324606-19298 ...
	I0719 03:27:04.662849    2472 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 03:27:04.662849    2472 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local docker daemon
	I0719 03:27:04.723572    2472 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0719 03:27:04.723642    2472 cache.go:56] Caching tarball of preloaded images
	I0719 03:27:04.723886    2472 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 03:27:04.726683    2472 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0719 03:27:04.726793    2472 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 ...
	I0719 03:27:04.830490    2472 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4?checksum=md5:6304692df2fe6f7b0bdd7f93d160be8c -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0719 03:27:04.854851    2472 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f to local cache
	I0719 03:27:04.855400    2472 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f.tar -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.44-1721324606-19298@sha256_1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f.tar
	I0719 03:27:04.855506    2472 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f.tar -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.44-1721324606-19298@sha256_1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f.tar
	I0719 03:27:04.855506    2472 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory
	I0719 03:27:04.855506    2472 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory, skipping pull
	I0719 03:27:04.855506    2472 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f exists in cache, skipping pull
	I0719 03:27:04.856207    2472 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f as a tarball
	
	
	* The control-plane node download-only-404100 host does not exist
	  To start a cluster, run: "minikube start -p download-only-404100"

-- /stdout --
** stderr ** 
	W0719 03:27:11.500573    3476 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.38s)
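A note on the recurring "Unable to resolve the current Docker CLI context" warning in the stderr block above: Docker stores each context's metadata in a directory named after the SHA-256 digest of the context name, which is why the missing "default" context resolves to that long hex directory under `.docker\contexts\meta`. A quick illustrative check (Python; not part of the test run):

```python
import hashlib

# Docker derives the context metadata directory name from the
# SHA-256 digest of the context name -- here, "default".
digest = hashlib.sha256(b"default").hexdigest()
print(digest)
# → 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f
```

This matches the path in the warning, confirming the lookup failed only because no "default" context metadata file exists, not because of a corrupted path.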

TestDownloadOnly/v1.30.3/DeleteAll (2.14s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (2.1433723s)
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (2.14s)

TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (1.23s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-404100
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-404100: (1.2243569s)
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (1.23s)

TestDownloadOnly/v1.31.0-beta.0/json-events (12.32s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-747100 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-747100 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=docker: (12.3200329s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (12.32s)

TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.28s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-747100
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-747100: exit status 85 (282.7395ms)

-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-081200 | minikube3\jenkins | v1.33.1 | 19 Jul 24 03:26 UTC |                     |
	|         | -p download-only-081200             |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr           |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |                   |         |                     |                     |
	|         | --container-runtime=docker          |                      |                   |         |                     |                     |
	|         | --driver=docker                     |                      |                   |         |                     |                     |
	| delete  | --all                               | minikube             | minikube3\jenkins | v1.33.1 | 19 Jul 24 03:26 UTC | 19 Jul 24 03:27 UTC |
	| delete  | -p download-only-081200             | download-only-081200 | minikube3\jenkins | v1.33.1 | 19 Jul 24 03:27 UTC | 19 Jul 24 03:27 UTC |
	| start   | -o=json --download-only             | download-only-404100 | minikube3\jenkins | v1.33.1 | 19 Jul 24 03:27 UTC |                     |
	|         | -p download-only-404100             |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr           |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |                   |         |                     |                     |
	|         | --container-runtime=docker          |                      |                   |         |                     |                     |
	|         | --driver=docker                     |                      |                   |         |                     |                     |
	| delete  | --all                               | minikube             | minikube3\jenkins | v1.33.1 | 19 Jul 24 03:27 UTC | 19 Jul 24 03:27 UTC |
	| delete  | -p download-only-404100             | download-only-404100 | minikube3\jenkins | v1.33.1 | 19 Jul 24 03:27 UTC | 19 Jul 24 03:27 UTC |
	| start   | -o=json --download-only             | download-only-747100 | minikube3\jenkins | v1.33.1 | 19 Jul 24 03:27 UTC |                     |
	|         | -p download-only-747100             |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr           |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |                   |         |                     |                     |
	|         | --container-runtime=docker          |                      |                   |         |                     |                     |
	|         | --driver=docker                     |                      |                   |         |                     |                     |
	|---------|-------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 03:27:15
	Running on machine: minikube3
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 03:27:15.305610    7632 out.go:291] Setting OutFile to fd 832 ...
	I0719 03:27:15.306608    7632 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:27:15.306608    7632 out.go:304] Setting ErrFile to fd 836...
	I0719 03:27:15.306608    7632 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:27:15.333237    7632 out.go:298] Setting JSON to true
	I0719 03:27:15.336430    7632 start.go:129] hostinfo: {"hostname":"minikube3","uptime":176620,"bootTime":1721183014,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0719 03:27:15.336430    7632 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 03:27:15.345278    7632 out.go:97] [download-only-747100] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0719 03:27:15.345547    7632 notify.go:220] Checking for updates...
	I0719 03:27:15.348799    7632 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0719 03:27:15.354384    7632 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0719 03:27:15.360481    7632 out.go:169] MINIKUBE_LOCATION=19302
	I0719 03:27:15.367385    7632 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0719 03:27:15.376027    7632 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0719 03:27:15.377041    7632 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 03:27:15.671683    7632 docker.go:123] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0719 03:27:15.681629    7632 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0719 03:27:16.031203    7632 info.go:266] docker info: {ID:924ecda6-fdfd-44a1-a6d3-1c1814628cc9 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:57 OomKillDisable:true NGoroutines:81 SystemTime:2024-07-19 03:27:15.976132912 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion
:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:D
ocker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0719 03:27:16.055134    7632 out.go:97] Using the docker driver based on user configuration
	I0719 03:27:16.055249    7632 start.go:297] selected driver: docker
	I0719 03:27:16.055308    7632 start.go:901] validating driver "docker" against <nil>
	I0719 03:27:16.075788    7632 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0719 03:27:16.450118    7632 info.go:266] docker info: {ID:924ecda6-fdfd-44a1-a6d3-1c1814628cc9 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:57 OomKillDisable:true NGoroutines:81 SystemTime:2024-07-19 03:27:16.400221045 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion
:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:D
ocker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0719 03:27:16.450330    7632 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 03:27:16.503403    7632 start_flags.go:393] Using suggested 16300MB memory alloc based on sys=65534MB, container=32098MB
	I0719 03:27:16.505109    7632 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0719 03:27:16.516465    7632 out.go:169] Using Docker Desktop driver with root privileges
	I0719 03:27:16.519503    7632 cni.go:84] Creating CNI manager for ""
	I0719 03:27:16.519503    7632 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 03:27:16.520646    7632 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 03:27:16.520891    7632 start.go:340] cluster config:
	{Name:download-only-747100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:16300 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-747100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseIn
terval:1m0s}
	I0719 03:27:16.524286    7632 out.go:97] Starting "download-only-747100" primary control-plane node in "download-only-747100" cluster
	I0719 03:27:16.524392    7632 cache.go:121] Beginning downloading kic base image for docker with docker
	I0719 03:27:16.527482    7632 out.go:97] Pulling base image v0.0.44-1721324606-19298 ...
	I0719 03:27:16.527482    7632 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0719 03:27:16.527482    7632 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local docker daemon
	I0719 03:27:16.600365    7632 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0719 03:27:16.600587    7632 cache.go:56] Caching tarball of preloaded images
	I0719 03:27:16.601158    7632 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0719 03:27:16.604479    7632 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0719 03:27:16.604601    7632 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	I0719 03:27:16.712754    7632 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4?checksum=md5:181d3c061f7abe363e688bf9ac3c9580 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0719 03:27:16.715749    7632 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f to local cache
	I0719 03:27:16.715893    7632 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f.tar -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.44-1721324606-19298@sha256_1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f.tar
	I0719 03:27:16.715953    7632 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f.tar -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.44-1721324606-19298@sha256_1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f.tar
	I0719 03:27:16.715953    7632 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory
	I0719 03:27:16.715953    7632 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory, skipping pull
	I0719 03:27:16.715953    7632 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f exists in cache, skipping pull
	I0719 03:27:16.716667    7632 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f as a tarball
	I0719 03:27:20.972599    7632 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	I0719 03:27:20.973822    7632 preload.go:254] verifying checksum of C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	I0719 03:27:21.849090    7632 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0719 03:27:21.849396    7632 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\download-only-747100\config.json ...
	I0719 03:27:21.850126    7632 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\download-only-747100\config.json: {Name:mkd5e67cd298d6cd7866c8a990d513880a76ec06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 03:27:21.851634    7632 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0719 03:27:21.851903    7632 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-beta.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.31.0-beta.0/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\windows\amd64\v1.31.0-beta.0/kubectl.exe
	
	
	* The control-plane node download-only-747100 host does not exist
	  To start a cluster, run: "minikube start -p download-only-747100"

-- /stdout --
** stderr ** 
	W0719 03:27:27.558645    3864 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.28s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAll (2.17s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (2.1726165s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (2.17s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (1.2s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-747100
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-747100: (1.1991726s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (1.20s)

TestDownloadOnlyKic (3.97s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p download-docker-868300 --alsologtostderr --driver=docker
aaa_download_only_test.go:232: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p download-docker-868300 --alsologtostderr --driver=docker: (1.6022852s)
helpers_test.go:175: Cleaning up "download-docker-868300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-docker-868300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-docker-868300: (1.4257677s)
--- PASS: TestDownloadOnlyKic (3.97s)

TestBinaryMirror (3.62s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-381500 --alsologtostderr --binary-mirror http://127.0.0.1:62258 --driver=docker
aaa_download_only_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-381500 --alsologtostderr --binary-mirror http://127.0.0.1:62258 --driver=docker: (1.9986528s)
helpers_test.go:175: Cleaning up "binary-mirror-381500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-381500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p binary-mirror-381500: (1.3621917s)
--- PASS: TestBinaryMirror (3.62s)

TestOffline (227.12s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-430600 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-430600 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker: (3m39.4987441s)
helpers_test.go:175: Cleaning up "offline-docker-430600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-430600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-430600: (7.622072s)
--- PASS: TestOffline (227.12s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.29s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-172900
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-172900: exit status 85 (291.2516ms)

-- stdout --
	* Profile "addons-172900" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-172900"

-- /stdout --
** stderr ** 
	W0719 03:27:42.634440   10500 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.29s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.28s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-172900
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-172900: exit status 85 (281.8844ms)

-- stdout --
	* Profile "addons-172900" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-172900"

-- /stdout --
** stderr ** 
	W0719 03:27:42.634440    5436 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.28s)

TestAddons/Setup (596.78s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-172900 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-172900 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: (9m56.7764978s)
--- PASS: TestAddons/Setup (596.78s)

TestAddons/parallel/InspektorGadget (14.08s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-mgrfm" [91d28a2a-a069-414f-b714-d15e7413ce6e] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.0332916s
addons_test.go:843: (dbg) Run:  out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-172900
addons_test.go:843: (dbg) Done: out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-172900: (8.0396208s)
--- PASS: TestAddons/parallel/InspektorGadget (14.08s)

TestAddons/parallel/MetricsServer (7.49s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 37.8572ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-sqffj" [aff9eb21-c143-40ec-b5e2-6722d136dff3] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.0137614s
addons_test.go:417: (dbg) Run:  kubectl --context addons-172900 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-172900 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:434: (dbg) Done: out/minikube-windows-amd64.exe -p addons-172900 addons disable metrics-server --alsologtostderr -v=1: (2.2315887s)
--- PASS: TestAddons/parallel/MetricsServer (7.49s)

TestAddons/parallel/HelmTiller (16.93s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 37.6025ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-8fwvt" [932fc701-12f7-457d-b3ff-d820f13bbe8e] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.0125874s
addons_test.go:475: (dbg) Run:  kubectl --context addons-172900 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-172900 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (9.7069042s)
addons_test.go:492: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-172900 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:492: (dbg) Done: out/minikube-windows-amd64.exe -p addons-172900 addons disable helm-tiller --alsologtostderr -v=1: (2.1387892s)
--- PASS: TestAddons/parallel/HelmTiller (16.93s)

TestAddons/parallel/CSI (88.59s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 37.0373ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-172900 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172900 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172900 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172900 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-172900 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [bf2f6e4a-4edc-4e3e-8940-192ce7fb97f0] Pending
helpers_test.go:344: "task-pv-pod" [bf2f6e4a-4edc-4e3e-8940-192ce7fb97f0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [bf2f6e4a-4edc-4e3e-8940-192ce7fb97f0] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 35.0685573s
addons_test.go:586: (dbg) Run:  kubectl --context addons-172900 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-172900 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-172900 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-172900 delete pod task-pv-pod
addons_test.go:596: (dbg) Done: kubectl --context addons-172900 delete pod task-pv-pod: (4.4784611s)
addons_test.go:602: (dbg) Run:  kubectl --context addons-172900 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-172900 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-172900 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [fd7def32-422f-4829-aaae-a58bb5fbbae7] Pending
helpers_test.go:344: "task-pv-pod-restore" [fd7def32-422f-4829-aaae-a58bb5fbbae7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [fd7def32-422f-4829-aaae-a58bb5fbbae7] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.0223328s
addons_test.go:628: (dbg) Run:  kubectl --context addons-172900 delete pod task-pv-pod-restore
addons_test.go:628: (dbg) Done: kubectl --context addons-172900 delete pod task-pv-pod-restore: (1.8095986s)
addons_test.go:632: (dbg) Run:  kubectl --context addons-172900 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-172900 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-172900 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-windows-amd64.exe -p addons-172900 addons disable csi-hostpath-driver --alsologtostderr -v=1: (8.3804121s)
addons_test.go:644: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-172900 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-windows-amd64.exe -p addons-172900 addons disable volumesnapshots --alsologtostderr -v=1: (2.4796869s)
--- PASS: TestAddons/parallel/CSI (88.59s)

TestAddons/parallel/Headlamp (34.73s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-172900 --alsologtostderr -v=1
addons_test.go:826: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-172900 --alsologtostderr -v=1: (3.7105065s)
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-6wspj" [dc3ad069-195b-4353-9a0d-f3c204ccd56a] Pending
helpers_test.go:344: "headlamp-7867546754-6wspj" [dc3ad069-195b-4353-9a0d-f3c204ccd56a] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-6wspj" [dc3ad069-195b-4353-9a0d-f3c204ccd56a] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 31.0183963s
--- PASS: TestAddons/parallel/Headlamp (34.73s)

TestAddons/parallel/CloudSpanner (7.91s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-dch54" [99d936e0-3488-40a1-9a85-52a4bd7be082] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.031064s
addons_test.go:862: (dbg) Run:  out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-172900
addons_test.go:862: (dbg) Done: out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-172900: (2.8689412s)
--- PASS: TestAddons/parallel/CloudSpanner (7.91s)

TestAddons/parallel/LocalPath (60.24s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-172900 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-172900 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172900 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172900 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172900 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172900 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172900 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172900 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172900 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172900 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172900 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172900 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172900 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172900 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172900 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172900 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172900 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172900 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172900 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172900 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172900 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172900 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172900 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172900 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172900 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172900 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172900 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172900 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172900 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172900 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172900 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172900 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172900 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172900 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172900 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172900 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172900 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [6c77872f-e40d-4136-ab2e-46eb91772d73] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [6c77872f-e40d-4136-ab2e-46eb91772d73] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [6c77872f-e40d-4136-ab2e-46eb91772d73] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 22.0142105s
addons_test.go:992: (dbg) Run:  kubectl --context addons-172900 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-172900 ssh "cat /opt/local-path-provisioner/pvc-94473844-0d72-4f94-aad5-dba8ec7bd09b_default_test-pvc/file1"
addons_test.go:1001: (dbg) Done: out/minikube-windows-amd64.exe -p addons-172900 ssh "cat /opt/local-path-provisioner/pvc-94473844-0d72-4f94-aad5-dba8ec7bd09b_default_test-pvc/file1": (1.1432286s)
addons_test.go:1013: (dbg) Run:  kubectl --context addons-172900 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-172900 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-172900 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (60.24s)

TestAddons/parallel/NvidiaDevicePlugin (7.5s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-dpkdp" [69a41730-9386-43ff-b248-f5a047232b42] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.0126887s
addons_test.go:1056: (dbg) Run:  out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-172900
addons_test.go:1056: (dbg) Done: out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-172900: (2.4857367s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (7.50s)

TestAddons/parallel/Yakd (6.06s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-zs2hc" [3d567de1-ee67-40e9-ae47-9852d605f61f] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.0531266s
--- PASS: TestAddons/parallel/Yakd (6.06s)

TestAddons/parallel/Volcano (58.74s)

=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano

=== CONT  TestAddons/parallel/Volcano
addons_test.go:905: volcano-controller stabilized in 57.1369ms
addons_test.go:897: volcano-admission stabilized in 57.625ms
addons_test.go:889: volcano-scheduler stabilized in 58.3115ms
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-b67sm" [1c4c1a5f-8890-44de-aab8-7af88f962edc] Running
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: app=volcano-scheduler healthy within 5.0128419s
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-7zr89" [b4ae2b79-1524-428a-a0c8-360bf499fe1f] Running
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: app=volcano-admission healthy within 5.0803392s
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-mpbpt" [d102f8ec-e85c-413b-ae87-c874b7cea75a] Running
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: app=volcano-controller healthy within 5.0121361s
addons_test.go:924: (dbg) Run:  kubectl --context addons-172900 delete -n volcano-system job volcano-admission-init
addons_test.go:930: (dbg) Run:  kubectl --context addons-172900 create -f testdata\vcjob.yaml
addons_test.go:938: (dbg) Run:  kubectl --context addons-172900 get vcjob -n my-volcano
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [a2142184-2273-46b3-9cbe-af6b031d47d2] Pending
helpers_test.go:344: "test-job-nginx-0" [a2142184-2273-46b3-9cbe-af6b031d47d2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [a2142184-2273-46b3-9cbe-af6b031d47d2] Running
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: volcano.sh/job-name=test-job healthy within 27.0783118s
addons_test.go:960: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-172900 addons disable volcano --alsologtostderr -v=1
addons_test.go:960: (dbg) Done: out/minikube-windows-amd64.exe -p addons-172900 addons disable volcano --alsologtostderr -v=1: (15.0723486s)
--- PASS: TestAddons/parallel/Volcano (58.74s)

TestAddons/serial/GCPAuth/Namespaces (0.39s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-172900 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-172900 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.39s)

TestAddons/StoppedEnableDisable (15.17s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-172900
addons_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-172900: (13.3615684s)
addons_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-172900
addons_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-172900
addons_test.go:187: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-172900
--- PASS: TestAddons/StoppedEnableDisable (15.17s)

TestCertOptions (96.94s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-974100 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-974100 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost: (1m26.7006973s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-974100 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-974100 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (1.3626431s)
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-974100 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Done: out/minikube-windows-amd64.exe ssh -p cert-options-974100 -- "sudo cat /etc/kubernetes/admin.conf": (1.3626692s)
helpers_test.go:175: Cleaning up "cert-options-974100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-974100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-974100: (7.3006649s)
--- PASS: TestCertOptions (96.94s)

TestCertExpiration (367.44s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-237800 --memory=2048 --cert-expiration=3m --driver=docker
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-237800 --memory=2048 --cert-expiration=3m --driver=docker: (1m47.4321466s)
E0719 04:56:00.318118   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-365100\client.crt: The system cannot find the path specified.
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-237800 --memory=2048 --cert-expiration=8760h --driver=docker
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-237800 --memory=2048 --cert-expiration=8760h --driver=docker: (1m12.2114076s)
helpers_test.go:175: Cleaning up "cert-expiration-237800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-237800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-237800: (7.7966702s)
--- PASS: TestCertExpiration (367.44s)

TestDockerFlags (121.33s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-315100 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-315100 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker: (1m52.3171207s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-315100 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-315100 ssh "sudo systemctl show docker --property=Environment --no-pager": (1.4988583s)
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-315100 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-315100 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (1.4017267s)
helpers_test.go:175: Cleaning up "docker-flags-315100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-315100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-315100: (6.1158157s)
--- PASS: TestDockerFlags (121.33s)

TestForceSystemdFlag (176.33s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-011300 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker
E0719 04:51:00.315854   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-365100\client.crt: The system cannot find the path specified.
docker_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-011300 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker: (2m47.1716097s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-011300 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-011300 ssh "docker info --format {{.CgroupDriver}}": (1.5933543s)
helpers_test.go:175: Cleaning up "force-systemd-flag-011300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-011300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-011300: (7.5618961s)
--- PASS: TestForceSystemdFlag (176.33s)

TestForceSystemdEnv (185.96s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-197200 --memory=2048 --alsologtostderr -v=5 --driver=docker
E0719 04:52:39.647230   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-172900\client.crt: The system cannot find the path specified.
docker_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-197200 --memory=2048 --alsologtostderr -v=5 --driver=docker: (2m58.1465799s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-197200 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-197200 ssh "docker info --format {{.CgroupDriver}}": (1.5498491s)
helpers_test.go:175: Cleaning up "force-systemd-env-197200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-197200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-197200: (6.2649087s)
--- PASS: TestForceSystemdEnv (185.96s)

TestErrorSpam/start (4.08s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-755000 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-755000 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-755000 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-755000 start --dry-run: (1.3607944s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-755000 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-755000 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-755000 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-755000 start --dry-run: (1.3487859s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-755000 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-755000 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-755000 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-755000 start --dry-run: (1.3651652s)
--- PASS: TestErrorSpam/start (4.08s)

TestErrorSpam/status (4.03s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-755000 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-755000 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-755000 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-755000 status: (1.3393706s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-755000 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-755000 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-755000 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-755000 status: (1.3132056s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-755000 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-755000 status
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-755000 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-755000 status: (1.3732153s)
--- PASS: TestErrorSpam/status (4.03s)

TestErrorSpam/pause (4.14s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-755000 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-755000 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-755000 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-755000 pause: (1.6544712s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-755000 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-755000 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-755000 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-755000 pause: (1.244355s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-755000 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-755000 pause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-755000 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-755000 pause: (1.2364892s)
--- PASS: TestErrorSpam/pause (4.14s)

TestErrorSpam/unpause (5.45s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-755000 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-755000 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-755000 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-755000 unpause: (1.9791297s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-755000 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-755000 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-755000 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-755000 unpause: (1.5106127s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-755000 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-755000 unpause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-755000 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-755000 unpause: (1.9590873s)
--- PASS: TestErrorSpam/unpause (5.45s)

TestErrorSpam/stop (20.66s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-755000 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-755000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-755000 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-755000 stop: (12.6651972s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-755000 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-755000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-755000 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-755000 stop: (4.0724114s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-755000 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-755000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-755000 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-755000 stop: (3.9217079s)
--- PASS: TestErrorSpam/stop (20.66s)

TestFunctional/serial/CopySyncFile (0.03s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\test\nested\copy\10972\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.03s)

TestFunctional/serial/StartWithProxy (86.49s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-365100 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker
E0719 03:42:39.617243   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-172900\client.crt: The system cannot find the path specified.
E0719 03:42:39.631879   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-172900\client.crt: The system cannot find the path specified.
E0719 03:42:39.647630   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-172900\client.crt: The system cannot find the path specified.
E0719 03:42:39.679307   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-172900\client.crt: The system cannot find the path specified.
E0719 03:42:39.725431   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-172900\client.crt: The system cannot find the path specified.
E0719 03:42:39.819232   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-172900\client.crt: The system cannot find the path specified.
E0719 03:42:39.993033   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-172900\client.crt: The system cannot find the path specified.
E0719 03:42:40.320713   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-172900\client.crt: The system cannot find the path specified.
E0719 03:42:40.967020   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-172900\client.crt: The system cannot find the path specified.
E0719 03:42:42.258001   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-172900\client.crt: The system cannot find the path specified.
E0719 03:42:44.821276   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-172900\client.crt: The system cannot find the path specified.
E0719 03:42:49.954117   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-172900\client.crt: The system cannot find the path specified.
E0719 03:43:00.199045   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-172900\client.crt: The system cannot find the path specified.
E0719 03:43:20.681019   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-172900\client.crt: The system cannot find the path specified.
functional_test.go:2230: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-365100 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker: (1m26.478832s)
--- PASS: TestFunctional/serial/StartWithProxy (86.49s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (48.29s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-365100 --alsologtostderr -v=8
E0719 03:44:01.652619   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-172900\client.crt: The system cannot find the path specified.
functional_test.go:655: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-365100 --alsologtostderr -v=8: (48.2872717s)
functional_test.go:659: soft start took 48.2886871s for "functional-365100" cluster.
--- PASS: TestFunctional/serial/SoftStart (48.29s)

TestFunctional/serial/KubeContext (0.14s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.14s)

TestFunctional/serial/KubectlGetPods (0.25s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-365100 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.25s)

TestFunctional/serial/CacheCmd/cache/add_remote (7.65s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-365100 cache add registry.k8s.io/pause:3.1: (2.84131s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-365100 cache add registry.k8s.io/pause:3.3: (2.4045542s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-365100 cache add registry.k8s.io/pause:latest: (2.4080796s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (7.65s)

TestFunctional/serial/CacheCmd/cache/add_local (4.55s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-365100 C:\Users\jenkins.minikube3\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local1760889675\001
functional_test.go:1073: (dbg) Done: docker build -t minikube-local-cache-test:functional-365100 C:\Users\jenkins.minikube3\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local1760889675\001: (2.2999428s)
functional_test.go:1085: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 cache add minikube-local-cache-test:functional-365100
functional_test.go:1085: (dbg) Done: out/minikube-windows-amd64.exe -p functional-365100 cache add minikube-local-cache-test:functional-365100: (1.7498551s)
functional_test.go:1090: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 cache delete minikube-local-cache-test:functional-365100
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-365100
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (4.55s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.27s)

TestFunctional/serial/CacheCmd/cache/list (0.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.25s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (1.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 ssh sudo crictl images
functional_test.go:1120: (dbg) Done: out/minikube-windows-amd64.exe -p functional-365100 ssh sudo crictl images: (1.2441168s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (1.25s)

TestFunctional/serial/CacheCmd/cache/cache_reload (5.71s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Done: out/minikube-windows-amd64.exe -p functional-365100 ssh sudo docker rmi registry.k8s.io/pause:latest: (1.251633s)
functional_test.go:1149: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-365100 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (1.2222093s)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	W0719 03:44:38.988702    9400 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-windows-amd64.exe -p functional-365100 cache reload: (1.9963528s)
functional_test.go:1159: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Done: out/minikube-windows-amd64.exe -p functional-365100 ssh sudo crictl inspecti registry.k8s.io/pause:latest: (1.2255493s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (5.71s)

TestFunctional/serial/CacheCmd/cache/delete (0.51s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.51s)

TestFunctional/serial/MinikubeKubectlCmd (0.53s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 kubectl -- --context functional-365100 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.53s)

TestFunctional/serial/ExtraConfig (56.35s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-365100 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0719 03:45:23.587200   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-172900\client.crt: The system cannot find the path specified.
functional_test.go:753: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-365100 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (56.3526766s)
functional_test.go:757: restart took 56.354178s for "functional-365100" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (56.35s)

TestFunctional/serial/ComponentHealth (0.19s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-365100 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.19s)

TestFunctional/serial/LogsCmd (2.85s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 logs
functional_test.go:1232: (dbg) Done: out/minikube-windows-amd64.exe -p functional-365100 logs: (2.8488056s)
--- PASS: TestFunctional/serial/LogsCmd (2.85s)

TestFunctional/serial/LogsFileCmd (3.05s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 logs --file C:\Users\jenkins.minikube3\AppData\Local\Temp\TestFunctionalserialLogsFileCmd734839176\001\logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-windows-amd64.exe -p functional-365100 logs --file C:\Users\jenkins.minikube3\AppData\Local\Temp\TestFunctionalserialLogsFileCmd734839176\001\logs.txt: (3.0457083s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.05s)

TestFunctional/serial/InvalidService (5.91s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-365100 apply -f testdata\invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-365100
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-365100: exit status 115 (1.5318225s)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31440 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	W0719 03:45:57.357625    8240 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube_service_5a553248039ac2ab6beea740c8d8ce1b809666c7_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-365100 delete -f testdata\invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (5.91s)

TestFunctional/parallel/DryRun (3.95s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-365100 --dry-run --memory 250MB --alsologtostderr --driver=docker
functional_test.go:970: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-365100 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (1.1572045s)

-- stdout --
	* [functional-365100] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	  - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19302
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	W0719 03:46:46.246240    6360 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0719 03:46:46.326696    6360 out.go:291] Setting OutFile to fd 832 ...
	I0719 03:46:46.326965    6360 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:46:46.326965    6360 out.go:304] Setting ErrFile to fd 764...
	I0719 03:46:46.326965    6360 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:46:46.349684    6360 out.go:298] Setting JSON to false
	I0719 03:46:46.355729    6360 start.go:129] hostinfo: {"hostname":"minikube3","uptime":177791,"bootTime":1721183014,"procs":196,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0719 03:46:46.355729    6360 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 03:46:46.357885    6360 out.go:177] * [functional-365100] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0719 03:46:46.364598    6360 notify.go:220] Checking for updates...
	I0719 03:46:46.367526    6360 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0719 03:46:46.378241    6360 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 03:46:46.380756    6360 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0719 03:46:46.383842    6360 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 03:46:46.386343    6360 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 03:46:46.392291    6360 config.go:182] Loaded profile config "functional-365100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 03:46:46.393451    6360 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 03:46:46.697607    6360 docker.go:123] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0719 03:46:46.714895    6360 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0719 03:46:47.166054    6360 info.go:266] docker info: {ID:924ecda6-fdfd-44a1-a6d3-1c1814628cc9 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:69 OomKillDisable:true NGoroutines:87 SystemTime:2024-07-19 03:46:47.104856093 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion
:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:D
ocker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0719 03:46:47.176355    6360 out.go:177] * Using the docker driver based on existing profile
	I0719 03:46:47.180532    6360 start.go:297] selected driver: docker
	I0719 03:46:47.180722    6360 start.go:901] validating driver "docker" against &{Name:functional-365100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-365100 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 03:46:47.180949    6360 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 03:46:47.247745    6360 out.go:177] 
	W0719 03:46:47.251628    6360 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0719 03:46:47.258657    6360 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-365100 --dry-run --alsologtostderr -v=1 --driver=docker
functional_test.go:987: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-365100 --dry-run --alsologtostderr -v=1 --driver=docker: (2.7919243s)
--- PASS: TestFunctional/parallel/DryRun (3.95s)

TestFunctional/parallel/InternationalLanguage (1.37s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-365100 --dry-run --memory 250MB --alsologtostderr --driver=docker
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-365100 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (1.3573319s)

-- stdout --
	* [functional-365100] minikube v1.33.1 sur Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	  - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19302
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	W0719 03:46:50.208047    3040 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0719 03:46:50.288109    3040 out.go:291] Setting OutFile to fd 500 ...
	I0719 03:46:50.289095    3040 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:46:50.289095    3040 out.go:304] Setting ErrFile to fd 880...
	I0719 03:46:50.289686    3040 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:46:50.314652    3040 out.go:298] Setting JSON to false
	I0719 03:46:50.317765    3040 start.go:129] hostinfo: {"hostname":"minikube3","uptime":177795,"bootTime":1721183014,"procs":196,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0719 03:46:50.317765    3040 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 03:46:50.329890    3040 out.go:177] * [functional-365100] minikube v1.33.1 sur Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0719 03:46:50.372179    3040 notify.go:220] Checking for updates...
	I0719 03:46:50.389741    3040 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0719 03:46:50.437952    3040 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 03:46:50.465839    3040 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0719 03:46:50.481736    3040 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 03:46:50.528824    3040 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 03:46:50.538462    3040 config.go:182] Loaded profile config "functional-365100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 03:46:50.539804    3040 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 03:46:50.847768    3040 docker.go:123] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0719 03:46:50.859708    3040 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0719 03:46:51.327383    3040 info.go:266] docker info: {ID:924ecda6-fdfd-44a1-a6d3-1c1814628cc9 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:69 OomKillDisable:true NGoroutines:87 SystemTime:2024-07-19 03:46:51.265096992 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion
:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:D
ocker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0719 03:46:51.330651    3040 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0719 03:46:51.334890    3040 start.go:297] selected driver: docker
	I0719 03:46:51.334890    3040 start.go:901] validating driver "docker" against &{Name:functional-365100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-365100 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 03:46:51.334890    3040 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 03:46:51.401538    3040 out.go:177] 
	W0719 03:46:51.404832    3040 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0719 03:46:51.409469    3040 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (1.37s)

TestFunctional/parallel/StatusCmd (4.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 status
functional_test.go:850: (dbg) Done: out/minikube-windows-amd64.exe -p functional-365100 status: (1.8405836s)
functional_test.go:856: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Done: out/minikube-windows-amd64.exe -p functional-365100 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (1.5575383s)
functional_test.go:868: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 status -o json
functional_test.go:868: (dbg) Done: out/minikube-windows-amd64.exe -p functional-365100 status -o json: (1.5632683s)
--- PASS: TestFunctional/parallel/StatusCmd (4.96s)

TestFunctional/parallel/AddonsCmd (1.02s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (1.02s)

TestFunctional/parallel/PersistentVolumeClaim (100.74s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [5c4071c6-3023-4a1d-8122-5403d1141087] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.0262695s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-365100 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-365100 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-365100 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-365100 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c5a576a4-c87b-48e8-96b3-c2eaa0ed5365] Pending
helpers_test.go:344: "sp-pod" [c5a576a4-c87b-48e8-96b3-c2eaa0ed5365] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c5a576a4-c87b-48e8-96b3-c2eaa0ed5365] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 37.0229347s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-365100 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-365100 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-365100 delete -f testdata/storage-provisioner/pod.yaml: (2.3117781s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-365100 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [1520d335-c7af-4407-81aa-ad8407d3c386] Pending
helpers_test.go:344: "sp-pod" [1520d335-c7af-4407-81aa-ad8407d3c386] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [1520d335-c7af-4407-81aa-ad8407d3c386] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 53.0184098s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-365100 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (100.74s)

TestFunctional/parallel/SSHCmd (3.1s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 ssh "echo hello"
functional_test.go:1721: (dbg) Done: out/minikube-windows-amd64.exe -p functional-365100 ssh "echo hello": (1.6496677s)
functional_test.go:1738: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Done: out/minikube-windows-amd64.exe -p functional-365100 ssh "cat /etc/hostname": (1.4435025s)
--- PASS: TestFunctional/parallel/SSHCmd (3.10s)

TestFunctional/parallel/CpCmd (10.45s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-365100 cp testdata\cp-test.txt /home/docker/cp-test.txt: (1.4495518s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 ssh -n functional-365100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-365100 ssh -n functional-365100 "sudo cat /home/docker/cp-test.txt": (1.7202353s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 cp functional-365100:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestFunctionalparallelCpCmd636726437\001\cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-365100 cp functional-365100:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestFunctionalparallelCpCmd636726437\001\cp-test.txt: (1.86022s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 ssh -n functional-365100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-365100 ssh -n functional-365100 "sudo cat /home/docker/cp-test.txt": (2.0423848s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-365100 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt: (1.5104949s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 ssh -n functional-365100 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-365100 ssh -n functional-365100 "sudo cat /tmp/does/not/exist/cp-test.txt": (1.8595552s)
--- PASS: TestFunctional/parallel/CpCmd (10.45s)

TestFunctional/parallel/MySQL (76.19s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-365100 replace --force -f testdata\mysql.yaml
functional_test.go:1789: (dbg) Done: kubectl --context functional-365100 replace --force -f testdata\mysql.yaml: (1.1141511s)
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-h2682" [24041e28-ce6d-4451-9367-92a39e125e33] Pending
helpers_test.go:344: "mysql-64454c8b5c-h2682" [24041e28-ce6d-4451-9367-92a39e125e33] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-h2682" [24041e28-ce6d-4451-9367-92a39e125e33] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 1m4.0147947s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-365100 exec mysql-64454c8b5c-h2682 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-365100 exec mysql-64454c8b5c-h2682 -- mysql -ppassword -e "show databases;": exit status 1 (346.4365ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-365100 exec mysql-64454c8b5c-h2682 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-365100 exec mysql-64454c8b5c-h2682 -- mysql -ppassword -e "show databases;": exit status 1 (286.9566ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-365100 exec mysql-64454c8b5c-h2682 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-365100 exec mysql-64454c8b5c-h2682 -- mysql -ppassword -e "show databases;": exit status 1 (355.7614ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-365100 exec mysql-64454c8b5c-h2682 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-365100 exec mysql-64454c8b5c-h2682 -- mysql -ppassword -e "show databases;": exit status 1 (332.9118ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
E0719 03:48:07.441677   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-172900\client.crt: The system cannot find the path specified.
functional_test.go:1803: (dbg) Run:  kubectl --context functional-365100 exec mysql-64454c8b5c-h2682 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (76.19s)

TestFunctional/parallel/FileSync (1.34s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/10972/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 ssh "sudo cat /etc/test/nested/copy/10972/hosts"
functional_test.go:1927: (dbg) Done: out/minikube-windows-amd64.exe -p functional-365100 ssh "sudo cat /etc/test/nested/copy/10972/hosts": (1.3384716s)
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (1.34s)

TestFunctional/parallel/CertSync (10.79s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/10972.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 ssh "sudo cat /etc/ssl/certs/10972.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-365100 ssh "sudo cat /etc/ssl/certs/10972.pem": (2.054509s)
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/10972.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 ssh "sudo cat /usr/share/ca-certificates/10972.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-365100 ssh "sudo cat /usr/share/ca-certificates/10972.pem": (1.7193923s)
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-365100 ssh "sudo cat /etc/ssl/certs/51391683.0": (1.7005253s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/109722.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 ssh "sudo cat /etc/ssl/certs/109722.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-365100 ssh "sudo cat /etc/ssl/certs/109722.pem": (1.6024463s)
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/109722.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 ssh "sudo cat /usr/share/ca-certificates/109722.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-365100 ssh "sudo cat /usr/share/ca-certificates/109722.pem": (1.6773862s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-365100 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (2.0125717s)
--- PASS: TestFunctional/parallel/CertSync (10.79s)

TestFunctional/parallel/NodeLabels (0.32s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-365100 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.32s)

TestFunctional/parallel/NonActiveRuntimeDisabled (1.92s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-365100 ssh "sudo systemctl is-active crio": exit status 1 (1.917459s)

-- stdout --
	inactive
-- /stdout --
** stderr ** 
	W0719 03:46:01.906382    9084 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (1.92s)

TestFunctional/parallel/License (5.16s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2284: (dbg) Done: out/minikube-windows-amd64.exe license: (5.1376484s)
--- PASS: TestFunctional/parallel/License (5.16s)

TestFunctional/parallel/ServiceCmd/DeployApp (26.62s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-365100 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-365100 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-668vg" [af67123f-615f-419b-9014-6d30bfcac364] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-668vg" [af67123f-615f-419b-9014-6d30bfcac364] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 26.0272575s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (26.62s)

TestFunctional/parallel/Version/short (0.28s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 version --short
--- PASS: TestFunctional/parallel/Version/short (0.28s)

TestFunctional/parallel/Version/components (2.14s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-windows-amd64.exe -p functional-365100 version -o=json --components: (2.1295216s)
--- PASS: TestFunctional/parallel/Version/components (2.14s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.93s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-365100 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-365100
docker.io/kicbase/echo-server:functional-365100
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-365100 image ls --format short --alsologtostderr:
W0719 03:47:32.219602   13700 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0719 03:47:32.308927   13700 out.go:291] Setting OutFile to fd 860 ...
I0719 03:47:32.309786   13700 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 03:47:32.309786   13700 out.go:304] Setting ErrFile to fd 960...
I0719 03:47:32.309786   13700 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 03:47:32.329355   13700 config.go:182] Loaded profile config "functional-365100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0719 03:47:32.329355   13700 config.go:182] Loaded profile config "functional-365100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0719 03:47:32.356882   13700 cli_runner.go:164] Run: docker container inspect functional-365100 --format={{.State.Status}}
I0719 03:47:32.559899   13700 ssh_runner.go:195] Run: systemctl --version
I0719 03:47:32.571587   13700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-365100
I0719 03:47:32.780793   13700 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63170 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-365100\id_rsa Username:docker}
I0719 03:47:32.916880   13700 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.93s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.95s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-365100 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/localhost/my-image                | functional-365100 | d1dac9d27d3c0 | 1.24MB |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| docker.io/kicbase/echo-server               | functional-365100 | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/etcd                        | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-365100 | 296f2bcee3774 | 30B    |
| registry.k8s.io/kube-apiserver              | v1.30.3           | 1f6d574d502f3 | 117MB  |
| registry.k8s.io/kube-proxy                  | v1.30.3           | 55bb025d2cfa5 | 84.7MB |
| docker.io/library/nginx                     | alpine            | 099a2d701db1f | 43.2MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-controller-manager     | v1.30.3           | 76932a3b37d7e | 111MB  |
| registry.k8s.io/kube-scheduler              | v1.30.3           | 3edc18e7b7672 | 62MB   |
| docker.io/library/nginx                     | latest            | fffffc90d343c | 188MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-365100 image ls --format table --alsologtostderr:
W0719 03:47:44.894613    3548 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0719 03:47:44.996712    3548 out.go:291] Setting OutFile to fd 636 ...
I0719 03:47:45.011135    3548 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 03:47:45.011135    3548 out.go:304] Setting ErrFile to fd 664...
I0719 03:47:45.011135    3548 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 03:47:45.028658    3548 config.go:182] Loaded profile config "functional-365100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0719 03:47:45.028658    3548 config.go:182] Loaded profile config "functional-365100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0719 03:47:45.041511    3548 cli_runner.go:164] Run: docker container inspect functional-365100 --format={{.State.Status}}
I0719 03:47:45.259556    3548 ssh_runner.go:195] Run: systemctl --version
I0719 03:47:45.273891    3548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-365100
I0719 03:47:45.476396    3548 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63170 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-365100\id_rsa Username:docker}
I0719 03:47:45.608223    3548 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.95s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.94s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-365100 image ls --format json --alsologtostderr:
[{"id":"296f2bcee37745f7453b0add04d0503ffca23690a24b1bbb809c5c7eacab359b","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-365100"],"size":"30"},{"id":"3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"62000000"},{"id":"76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"111000000"},{"id":"099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43200000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-365100"],"size":"4940000"},{"id":"d1dac9d27d3c0640b6065c93f3298e77d31d40b777dddd03c3d9259a738463c8","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-365100"],"size":"1240000"},{"id":"fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"149000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"84700000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"117000000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-365100 image ls --format json --alsologtostderr:
W0719 03:47:43.966724   12028 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0719 03:47:44.054771   12028 out.go:291] Setting OutFile to fd 860 ...
I0719 03:47:44.056000   12028 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 03:47:44.056037   12028 out.go:304] Setting ErrFile to fd 796...
I0719 03:47:44.056088   12028 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 03:47:44.074824   12028 config.go:182] Loaded profile config "functional-365100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0719 03:47:44.074824   12028 config.go:182] Loaded profile config "functional-365100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0719 03:47:44.101071   12028 cli_runner.go:164] Run: docker container inspect functional-365100 --format={{.State.Status}}
I0719 03:47:44.326079   12028 ssh_runner.go:195] Run: systemctl --version
I0719 03:47:44.340474   12028 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-365100
I0719 03:47:44.540798   12028 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63170 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-365100\id_rsa Username:docker}
I0719 03:47:44.696269   12028 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.94s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.94s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-365100 image ls --format yaml --alsologtostderr:
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43200000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 296f2bcee37745f7453b0add04d0503ffca23690a24b1bbb809c5c7eacab359b
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-365100
size: "30"
- id: 1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "117000000"
- id: 76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "111000000"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "149000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "62000000"
- id: 55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "84700000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-365100
size: "4940000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"

functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-365100 image ls --format yaml --alsologtostderr:
W0719 03:47:33.139072   14412 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0719 03:47:33.235662   14412 out.go:291] Setting OutFile to fd 944 ...
I0719 03:47:33.236231   14412 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 03:47:33.236231   14412 out.go:304] Setting ErrFile to fd 784...
I0719 03:47:33.236231   14412 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 03:47:33.259751   14412 config.go:182] Loaded profile config "functional-365100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0719 03:47:33.260374   14412 config.go:182] Loaded profile config "functional-365100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0719 03:47:33.285072   14412 cli_runner.go:164] Run: docker container inspect functional-365100 --format={{.State.Status}}
I0719 03:47:33.531548   14412 ssh_runner.go:195] Run: systemctl --version
I0719 03:47:33.542170   14412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-365100
I0719 03:47:33.745195   14412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63170 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-365100\id_rsa Username:docker}
I0719 03:47:33.886358   14412 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.94s)

TestFunctional/parallel/ImageCommands/ImageBuild (9.88s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-365100 ssh pgrep buildkitd: exit status 1 (1.269728s)

** stderr ** 
	W0719 03:47:34.099463   10920 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 image build -t localhost/my-image:functional-365100 testdata\build --alsologtostderr
E0719 03:47:39.615218   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-172900\client.crt: The system cannot find the path specified.
functional_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe -p functional-365100 image build -t localhost/my-image:functional-365100 testdata\build --alsologtostderr: (7.7134508s)
functional_test.go:322: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-365100 image build -t localhost/my-image:functional-365100 testdata\build --alsologtostderr:
W0719 03:47:35.357747   14612 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0719 03:47:35.456008   14612 out.go:291] Setting OutFile to fd 832 ...
I0719 03:47:35.477210   14612 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 03:47:35.477210   14612 out.go:304] Setting ErrFile to fd 792...
I0719 03:47:35.477210   14612 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 03:47:35.494438   14612 config.go:182] Loaded profile config "functional-365100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0719 03:47:35.512744   14612 config.go:182] Loaded profile config "functional-365100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0719 03:47:35.537931   14612 cli_runner.go:164] Run: docker container inspect functional-365100 --format={{.State.Status}}
I0719 03:47:35.760245   14612 ssh_runner.go:195] Run: systemctl --version
I0719 03:47:35.769627   14612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-365100
I0719 03:47:35.969278   14612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63170 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-365100\id_rsa Username:docker}
I0719 03:47:36.099245   14612 build_images.go:161] Building image from path: C:\Users\jenkins.minikube3\AppData\Local\Temp\build.579109241.tar
I0719 03:47:36.114029   14612 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0719 03:47:36.152936   14612 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.579109241.tar
I0719 03:47:36.166590   14612 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.579109241.tar: stat -c "%s %y" /var/lib/minikube/build/build.579109241.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.579109241.tar': No such file or directory
I0719 03:47:36.166645   14612 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\AppData\Local\Temp\build.579109241.tar --> /var/lib/minikube/build/build.579109241.tar (3072 bytes)
I0719 03:47:36.228858   14612 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.579109241
I0719 03:47:36.267834   14612 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.579109241 -xf /var/lib/minikube/build/build.579109241.tar
I0719 03:47:36.299929   14612 docker.go:360] Building image: /var/lib/minikube/build/build.579109241
I0719 03:47:36.310833   14612 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-365100 /var/lib/minikube/build/build.579109241
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.8s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.1s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.9s

#6 [2/3] RUN true
#6 DONE 3.4s

#7 [3/3] ADD content.txt /
#7 DONE 0.2s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.2s done
#8 writing image sha256:d1dac9d27d3c0640b6065c93f3298e77d31d40b777dddd03c3d9259a738463c8 done
#8 naming to localhost/my-image:functional-365100 0.0s done
#8 DONE 0.2s
I0719 03:47:42.858226   14612 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-365100 /var/lib/minikube/build/build.579109241: (6.5472213s)
I0719 03:47:42.871424   14612 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.579109241
I0719 03:47:42.907105   14612 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.579109241.tar
I0719 03:47:42.929397   14612 build_images.go:217] Built localhost/my-image:functional-365100 from C:\Users\jenkins.minikube3\AppData\Local\Temp\build.579109241.tar
I0719 03:47:42.929397   14612 build_images.go:133] succeeded building to: functional-365100
I0719 03:47:42.929397   14612 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (9.88s)

TestFunctional/parallel/ImageCommands/Setup (3.15s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (2.8254842s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-365100
--- PASS: TestFunctional/parallel/ImageCommands/Setup (3.15s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.01s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 image load --daemon docker.io/kicbase/echo-server:functional-365100 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-windows-amd64.exe -p functional-365100 image load --daemon docker.io/kicbase/echo-server:functional-365100 --alsologtostderr: (3.8431345s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-365100 image ls: (1.1701725s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.01s)

TestFunctional/parallel/DockerEnv/powershell (12.13s)

=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:495: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-365100 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-365100"
functional_test.go:495: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-365100 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-365100": (7.6407618s)
functional_test.go:518: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-365100 docker-env | Invoke-Expression ; docker images"
functional_test.go:518: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-365100 docker-env | Invoke-Expression ; docker images": (4.4731195s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (12.13s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 image load --daemon docker.io/kicbase/echo-server:functional-365100 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-windows-amd64.exe -p functional-365100 image load --daemon docker.io/kicbase/echo-server:functional-365100 --alsologtostderr: (2.4091203s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-365100 image ls: (1.7136044s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.12s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.7s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.70s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.74s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.74s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.72s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.72s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:234: (dbg) Done: docker pull docker.io/kicbase/echo-server:latest: (1.1132386s)
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-365100
functional_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 image load --daemon docker.io/kicbase/echo-server:functional-365100 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-windows-amd64.exe -p functional-365100 image load --daemon docker.io/kicbase/echo-server:functional-365100 --alsologtostderr: (1.9659052s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-365100 image ls: (1.2114859s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.55s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 image save docker.io/kicbase/echo-server:functional-365100 C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-windows-amd64.exe -p functional-365100 image save docker.io/kicbase/echo-server:functional-365100 C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr: (2.1278629s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.13s)

TestFunctional/parallel/ImageCommands/ImageRemove (2.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 image rm docker.io/kicbase/echo-server:functional-365100 --alsologtostderr
functional_test.go:391: (dbg) Done: out/minikube-windows-amd64.exe -p functional-365100 image rm docker.io/kicbase/echo-server:functional-365100 --alsologtostderr: (1.2065094s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-365100 image ls: (1.0285607s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (2.24s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (5.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-windows-amd64.exe -p functional-365100 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr: (3.3602091s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-365100 image ls: (1.9756843s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (5.34s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (2.84s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-365100 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-365100 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-365100 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 8636: OpenProcess: The parameter is incorrect.
helpers_test.go:508: unable to kill pid 14344: TerminateProcess: Access is denied.
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-365100 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (2.84s)

TestFunctional/parallel/ServiceCmd/List (2.65s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 service list
functional_test.go:1455: (dbg) Done: out/minikube-windows-amd64.exe -p functional-365100 service list: (2.6482788s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (2.65s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-365100 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (36.29s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-365100 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Done: kubectl --context functional-365100 apply -f testdata\testsvc.yaml: (1.1692529s)
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [26428362-2359-47d2-9493-0e7336be8fd5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [26428362-2359-47d2-9493-0e7336be8fd5] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 35.0631785s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (36.29s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (2.98s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 service list -o json
functional_test.go:1485: (dbg) Done: out/minikube-windows-amd64.exe -p functional-365100 service list -o json: (2.976298s)
functional_test.go:1490: Took "2.976359s" to run "out/minikube-windows-amd64.exe -p functional-365100 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (2.98s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-365100
functional_test.go:423: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 image save --daemon docker.io/kicbase/echo-server:functional-365100 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-windows-amd64.exe -p functional-365100 image save --daemon docker.io/kicbase/echo-server:functional-365100 --alsologtostderr: (2.8981309s)
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-365100
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.44s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (15.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-365100 service --namespace=default --https --url hello-node: exit status 1 (15.024413s)

-- stdout --
	https://127.0.0.1:63457

-- /stdout --
** stderr ** 
	W0719 03:46:32.059787    2604 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1518: found endpoint: https://127.0.0.1:63457
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.03s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (2.14s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1271: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.6551052s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (2.14s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (1.94s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1306: (dbg) Done: out/minikube-windows-amd64.exe profile list: (1.6683326s)
functional_test.go:1311: Took "1.6683326s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1325: Took "270.5016ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (1.94s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (2.09s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1357: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (1.8326389s)
functional_test.go:1362: Took "1.8329615s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1375: Took "252.1149ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (2.09s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (15.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-365100 service hello-node --url --format={{.IP}}: exit status 1 (15.0340768s)

-- stdout --
	127.0.0.1

-- /stdout --
** stderr ** 
	W0719 03:46:47.093538   14952 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.03s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (15.02s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-365100 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-365100 service hello-node --url: exit status 1 (15.0193519s)

-- stdout --
	http://127.0.0.1:63548

-- /stdout --
** stderr ** 
	W0719 03:47:02.151073    7128 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1561: found endpoint for hello-node: http://127.0.0.1:63548
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.3s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-365100 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.30s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.23s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-365100 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1640: OpenProcess: The parameter is incorrect.
helpers_test.go:508: unable to kill pid 8752: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.23s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.39s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-365100
--- PASS: TestFunctional/delete_echo-server_images (0.39s)

                                                
                                    
TestFunctional/delete_my-image_image (0.17s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-365100
--- PASS: TestFunctional/delete_my-image_image (0.17s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.2s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-365100
--- PASS: TestFunctional/delete_minikube_cached_images (0.20s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (264.41s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-055700 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker
E0719 03:52:39.620237   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-172900\client.crt: The system cannot find the path specified.
E0719 03:56:00.288677   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-365100\client.crt: The system cannot find the path specified.
E0719 03:56:00.296980   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-365100\client.crt: The system cannot find the path specified.
E0719 03:56:00.312099   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-365100\client.crt: The system cannot find the path specified.
E0719 03:56:00.345176   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-365100\client.crt: The system cannot find the path specified.
E0719 03:56:00.386695   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-365100\client.crt: The system cannot find the path specified.
E0719 03:56:00.473533   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-365100\client.crt: The system cannot find the path specified.
E0719 03:56:00.643420   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-365100\client.crt: The system cannot find the path specified.
E0719 03:56:00.982622   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-365100\client.crt: The system cannot find the path specified.
E0719 03:56:01.634177   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-365100\client.crt: The system cannot find the path specified.
E0719 03:56:02.921057   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-365100\client.crt: The system cannot find the path specified.
E0719 03:56:05.494045   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-365100\client.crt: The system cannot find the path specified.
E0719 03:56:10.629261   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-365100\client.crt: The system cannot find the path specified.
E0719 03:56:20.883214   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-365100\client.crt: The system cannot find the path specified.
ha_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-055700 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker: (4m20.5540362s)
ha_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 status -v=7 --alsologtostderr
ha_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 status -v=7 --alsologtostderr: (3.8593263s)
--- PASS: TestMultiControlPlane/serial/StartCluster (264.41s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (27.4s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-055700 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-055700 -- rollout status deployment/busybox
E0719 03:56:41.365178   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-365100\client.crt: The system cannot find the path specified.
ha_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-055700 -- rollout status deployment/busybox: (17.2470514s)
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-055700 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-055700 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-055700 -- exec busybox-fc5497c4f-75mf7 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-055700 -- exec busybox-fc5497c4f-75mf7 -- nslookup kubernetes.io: (1.9297189s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-055700 -- exec busybox-fc5497c4f-c94gj -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-055700 -- exec busybox-fc5497c4f-c94gj -- nslookup kubernetes.io: (1.5946418s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-055700 -- exec busybox-fc5497c4f-k8bjq -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-055700 -- exec busybox-fc5497c4f-k8bjq -- nslookup kubernetes.io: (1.5296874s)
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-055700 -- exec busybox-fc5497c4f-75mf7 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-055700 -- exec busybox-fc5497c4f-c94gj -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-055700 -- exec busybox-fc5497c4f-k8bjq -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-055700 -- exec busybox-fc5497c4f-75mf7 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-055700 -- exec busybox-fc5497c4f-c94gj -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-055700 -- exec busybox-fc5497c4f-k8bjq -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (27.40s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (3.82s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-055700 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-055700 -- exec busybox-fc5497c4f-75mf7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-055700 -- exec busybox-fc5497c4f-75mf7 -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-055700 -- exec busybox-fc5497c4f-c94gj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-055700 -- exec busybox-fc5497c4f-c94gj -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-055700 -- exec busybox-fc5497c4f-k8bjq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-055700 -- exec busybox-fc5497c4f-k8bjq -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (3.82s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (74.35s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe node add -p ha-055700 -v=7 --alsologtostderr
E0719 03:57:22.338213   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-365100\client.crt: The system cannot find the path specified.
E0719 03:57:39.623022   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-172900\client.crt: The system cannot find the path specified.
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe node add -p ha-055700 -v=7 --alsologtostderr: (1m9.1301258s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 status -v=7 --alsologtostderr: (5.2230044s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (74.35s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.24s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-055700 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.24s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (3.82s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (3.8186696s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (3.82s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (77.79s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 status --output json -v=7 --alsologtostderr: (4.5640485s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 cp testdata\cp-test.txt ha-055700:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 cp testdata\cp-test.txt ha-055700:/home/docker/cp-test.txt: (1.3957757s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700 "sudo cat /home/docker/cp-test.txt": (1.2369366s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 cp ha-055700:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1052740900\001\cp-test_ha-055700.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 cp ha-055700:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1052740900\001\cp-test_ha-055700.txt: (1.2571377s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700 "sudo cat /home/docker/cp-test.txt": (1.2410076s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 cp ha-055700:/home/docker/cp-test.txt ha-055700-m02:/home/docker/cp-test_ha-055700_ha-055700-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 cp ha-055700:/home/docker/cp-test.txt ha-055700-m02:/home/docker/cp-test_ha-055700_ha-055700-m02.txt: (1.8955396s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700 "sudo cat /home/docker/cp-test.txt": (1.2603216s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700-m02 "sudo cat /home/docker/cp-test_ha-055700_ha-055700-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700-m02 "sudo cat /home/docker/cp-test_ha-055700_ha-055700-m02.txt": (1.270349s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 cp ha-055700:/home/docker/cp-test.txt ha-055700-m03:/home/docker/cp-test_ha-055700_ha-055700-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 cp ha-055700:/home/docker/cp-test.txt ha-055700-m03:/home/docker/cp-test_ha-055700_ha-055700-m03.txt: (1.8023689s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700 "sudo cat /home/docker/cp-test.txt"
E0719 03:58:44.267393   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-365100\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700 "sudo cat /home/docker/cp-test.txt": (1.2426312s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700-m03 "sudo cat /home/docker/cp-test_ha-055700_ha-055700-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700-m03 "sudo cat /home/docker/cp-test_ha-055700_ha-055700-m03.txt": (1.2558386s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 cp ha-055700:/home/docker/cp-test.txt ha-055700-m04:/home/docker/cp-test_ha-055700_ha-055700-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 cp ha-055700:/home/docker/cp-test.txt ha-055700-m04:/home/docker/cp-test_ha-055700_ha-055700-m04.txt: (1.9297091s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700 "sudo cat /home/docker/cp-test.txt": (1.3017702s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700-m04 "sudo cat /home/docker/cp-test_ha-055700_ha-055700-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700-m04 "sudo cat /home/docker/cp-test_ha-055700_ha-055700-m04.txt": (1.325933s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 cp testdata\cp-test.txt ha-055700-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 cp testdata\cp-test.txt ha-055700-m02:/home/docker/cp-test.txt: (1.2790608s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700-m02 "sudo cat /home/docker/cp-test.txt": (1.2680193s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 cp ha-055700-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1052740900\001\cp-test_ha-055700-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 cp ha-055700-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1052740900\001\cp-test_ha-055700-m02.txt: (1.3201902s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700-m02 "sudo cat /home/docker/cp-test.txt": (1.2705999s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 cp ha-055700-m02:/home/docker/cp-test.txt ha-055700:/home/docker/cp-test_ha-055700-m02_ha-055700.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 cp ha-055700-m02:/home/docker/cp-test.txt ha-055700:/home/docker/cp-test_ha-055700-m02_ha-055700.txt: (1.8108997s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700-m02 "sudo cat /home/docker/cp-test.txt": (1.2639955s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700 "sudo cat /home/docker/cp-test_ha-055700-m02_ha-055700.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700 "sudo cat /home/docker/cp-test_ha-055700-m02_ha-055700.txt": (1.2595999s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 cp ha-055700-m02:/home/docker/cp-test.txt ha-055700-m03:/home/docker/cp-test_ha-055700-m02_ha-055700-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 cp ha-055700-m02:/home/docker/cp-test.txt ha-055700-m03:/home/docker/cp-test_ha-055700-m02_ha-055700-m03.txt: (1.8354932s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700-m02 "sudo cat /home/docker/cp-test.txt"
E0719 03:59:02.816865   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-172900\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700-m02 "sudo cat /home/docker/cp-test.txt": (1.216671s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700-m03 "sudo cat /home/docker/cp-test_ha-055700-m02_ha-055700-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700-m03 "sudo cat /home/docker/cp-test_ha-055700-m02_ha-055700-m03.txt": (1.2636054s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 cp ha-055700-m02:/home/docker/cp-test.txt ha-055700-m04:/home/docker/cp-test_ha-055700-m02_ha-055700-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 cp ha-055700-m02:/home/docker/cp-test.txt ha-055700-m04:/home/docker/cp-test_ha-055700-m02_ha-055700-m04.txt: (1.8491476s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700-m02 "sudo cat /home/docker/cp-test.txt": (1.2786922s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700-m04 "sudo cat /home/docker/cp-test_ha-055700-m02_ha-055700-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700-m04 "sudo cat /home/docker/cp-test_ha-055700-m02_ha-055700-m04.txt": (1.2537049s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 cp testdata\cp-test.txt ha-055700-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 cp testdata\cp-test.txt ha-055700-m03:/home/docker/cp-test.txt: (1.2883643s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700-m03 "sudo cat /home/docker/cp-test.txt": (1.2514303s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 cp ha-055700-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1052740900\001\cp-test_ha-055700-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 cp ha-055700-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1052740900\001\cp-test_ha-055700-m03.txt: (1.2885879s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700-m03 "sudo cat /home/docker/cp-test.txt": (1.2547226s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 cp ha-055700-m03:/home/docker/cp-test.txt ha-055700:/home/docker/cp-test_ha-055700-m03_ha-055700.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 cp ha-055700-m03:/home/docker/cp-test.txt ha-055700:/home/docker/cp-test_ha-055700-m03_ha-055700.txt: (1.8621097s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700-m03 "sudo cat /home/docker/cp-test.txt": (1.2959811s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700 "sudo cat /home/docker/cp-test_ha-055700-m03_ha-055700.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700 "sudo cat /home/docker/cp-test_ha-055700-m03_ha-055700.txt": (1.2607478s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 cp ha-055700-m03:/home/docker/cp-test.txt ha-055700-m02:/home/docker/cp-test_ha-055700-m03_ha-055700-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 cp ha-055700-m03:/home/docker/cp-test.txt ha-055700-m02:/home/docker/cp-test_ha-055700-m03_ha-055700-m02.txt: (1.92247s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700-m03 "sudo cat /home/docker/cp-test.txt": (1.2813635s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700-m02 "sudo cat /home/docker/cp-test_ha-055700-m03_ha-055700-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700-m02 "sudo cat /home/docker/cp-test_ha-055700-m03_ha-055700-m02.txt": (1.2767012s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 cp ha-055700-m03:/home/docker/cp-test.txt ha-055700-m04:/home/docker/cp-test_ha-055700-m03_ha-055700-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 cp ha-055700-m03:/home/docker/cp-test.txt ha-055700-m04:/home/docker/cp-test_ha-055700-m03_ha-055700-m04.txt: (1.8586176s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700-m03 "sudo cat /home/docker/cp-test.txt": (1.2700178s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700-m04 "sudo cat /home/docker/cp-test_ha-055700-m03_ha-055700-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700-m04 "sudo cat /home/docker/cp-test_ha-055700-m03_ha-055700-m04.txt": (1.2914454s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 cp testdata\cp-test.txt ha-055700-m04:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 cp testdata\cp-test.txt ha-055700-m04:/home/docker/cp-test.txt: (1.3044688s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700-m04 "sudo cat /home/docker/cp-test.txt": (1.2862672s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 cp ha-055700-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1052740900\001\cp-test_ha-055700-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 cp ha-055700-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1052740900\001\cp-test_ha-055700-m04.txt: (1.3337364s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700-m04 "sudo cat /home/docker/cp-test.txt": (1.3248291s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 cp ha-055700-m04:/home/docker/cp-test.txt ha-055700:/home/docker/cp-test_ha-055700-m04_ha-055700.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 cp ha-055700-m04:/home/docker/cp-test.txt ha-055700:/home/docker/cp-test_ha-055700-m04_ha-055700.txt: (1.8558695s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700-m04 "sudo cat /home/docker/cp-test.txt": (1.2496152s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700 "sudo cat /home/docker/cp-test_ha-055700-m04_ha-055700.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700 "sudo cat /home/docker/cp-test_ha-055700-m04_ha-055700.txt": (1.2349277s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 cp ha-055700-m04:/home/docker/cp-test.txt ha-055700-m02:/home/docker/cp-test_ha-055700-m04_ha-055700-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 cp ha-055700-m04:/home/docker/cp-test.txt ha-055700-m02:/home/docker/cp-test_ha-055700-m04_ha-055700-m02.txt: (1.7858507s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700-m04 "sudo cat /home/docker/cp-test.txt": (1.1974937s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700-m02 "sudo cat /home/docker/cp-test_ha-055700-m04_ha-055700-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700-m02 "sudo cat /home/docker/cp-test_ha-055700-m04_ha-055700-m02.txt": (1.240003s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 cp ha-055700-m04:/home/docker/cp-test.txt ha-055700-m03:/home/docker/cp-test_ha-055700-m04_ha-055700-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 cp ha-055700-m04:/home/docker/cp-test.txt ha-055700-m03:/home/docker/cp-test_ha-055700-m04_ha-055700-m03.txt: (1.7819513s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700-m04 "sudo cat /home/docker/cp-test.txt": (1.2633289s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700-m03 "sudo cat /home/docker/cp-test_ha-055700-m04_ha-055700-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 ssh -n ha-055700-m03 "sudo cat /home/docker/cp-test_ha-055700-m04_ha-055700-m03.txt": (1.2310262s)
--- PASS: TestMultiControlPlane/serial/CopyFile (77.79s)
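The CopyFile log above follows a fixed naming convention: each source node's `/home/docker/cp-test.txt` is copied onto every other node as `cp-test_<source>_<destination>.txt`, then verified with `ssh ... sudo cat`. A minimal sketch of that pairing, assuming the four node names from this run (the helper itself is illustrative, not minikube's implementation):

```python
# Mirror of the destination-path pattern visible in the CopyFile log:
# cp-test_<source>_<destination>.txt under /home/docker on the target node.
# The helper name is hypothetical; only the naming scheme comes from the log.

def cp_test_dest(src: str, dst: str) -> str:
    """Destination path used when copying src's test file onto dst."""
    return f"/home/docker/cp-test_{src}_{dst}.txt"

nodes = ["ha-055700", "ha-055700-m02", "ha-055700-m03", "ha-055700-m04"]
pairs = [(s, d) for s in nodes for d in nodes if s != d]

# 4 nodes -> 12 ordered pairs, matching the 12 node-to-node copies above.
print(len(pairs))
print(cp_test_dest("ha-055700-m02", "ha-055700-m03"))
```

Running this prints `12` and `/home/docker/cp-test_ha-055700-m02_ha-055700-m03.txt`, the same path seen in the `cp` commands above.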

TestMultiControlPlane/serial/StopSecondaryNode (15.59s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 node stop m02 -v=7 --alsologtostderr: (12.0969129s)
ha_test.go:369: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-055700 status -v=7 --alsologtostderr: exit status 7 (3.488453s)

-- stdout --
	ha-055700
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-055700-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-055700-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-055700-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	W0719 03:59:57.447307    8448 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0719 03:59:57.534152    8448 out.go:291] Setting OutFile to fd 1004 ...
	I0719 03:59:57.534152    8448 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:59:57.534152    8448 out.go:304] Setting ErrFile to fd 732...
	I0719 03:59:57.534152    8448 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:59:57.552100    8448 out.go:298] Setting JSON to false
	I0719 03:59:57.552100    8448 mustload.go:65] Loading cluster: ha-055700
	I0719 03:59:57.552100    8448 notify.go:220] Checking for updates...
	I0719 03:59:57.553179    8448 config.go:182] Loaded profile config "ha-055700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 03:59:57.553179    8448 status.go:255] checking status of ha-055700 ...
	I0719 03:59:57.574087    8448 cli_runner.go:164] Run: docker container inspect ha-055700 --format={{.State.Status}}
	I0719 03:59:57.761001    8448 status.go:330] ha-055700 host status = "Running" (err=<nil>)
	I0719 03:59:57.761001    8448 host.go:66] Checking if "ha-055700" exists ...
	I0719 03:59:57.772081    8448 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-055700
	I0719 03:59:57.949753    8448 host.go:66] Checking if "ha-055700" exists ...
	I0719 03:59:57.964592    8448 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 03:59:57.974744    8448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-055700
	I0719 03:59:58.151077    8448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63617 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-055700\id_rsa Username:docker}
	I0719 03:59:58.282574    8448 ssh_runner.go:195] Run: systemctl --version
	I0719 03:59:58.320914    8448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 03:59:58.362446    8448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-055700
	I0719 03:59:58.544764    8448 kubeconfig.go:125] found "ha-055700" server: "https://127.0.0.1:63621"
	I0719 03:59:58.544863    8448 api_server.go:166] Checking apiserver status ...
	I0719 03:59:58.558847    8448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 03:59:58.604158    8448 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2626/cgroup
	I0719 03:59:58.628913    8448 api_server.go:182] apiserver freezer: "7:freezer:/docker/f002a5630cb54ac2ee62647d910ebe825b34302129153766c09be7879b33f215/kubepods/burstable/pode7b2b3a361f419e7367ac8ea0f8b3529/742f0bb76bb473ced1ec7444e6666662a9de8f880c30f30d50afa83a82b957a8"
	I0719 03:59:58.641943    8448 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f002a5630cb54ac2ee62647d910ebe825b34302129153766c09be7879b33f215/kubepods/burstable/pode7b2b3a361f419e7367ac8ea0f8b3529/742f0bb76bb473ced1ec7444e6666662a9de8f880c30f30d50afa83a82b957a8/freezer.state
	I0719 03:59:58.660665    8448 api_server.go:204] freezer state: "THAWED"
	I0719 03:59:58.660665    8448 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:63621/healthz ...
	I0719 03:59:58.671677    8448 api_server.go:279] https://127.0.0.1:63621/healthz returned 200:
	ok
	I0719 03:59:58.672682    8448 status.go:422] ha-055700 apiserver status = Running (err=<nil>)
	I0719 03:59:58.672682    8448 status.go:257] ha-055700 status: &{Name:ha-055700 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 03:59:58.672682    8448 status.go:255] checking status of ha-055700-m02 ...
	I0719 03:59:58.695689    8448 cli_runner.go:164] Run: docker container inspect ha-055700-m02 --format={{.State.Status}}
	I0719 03:59:58.888230    8448 status.go:330] ha-055700-m02 host status = "Stopped" (err=<nil>)
	I0719 03:59:58.888230    8448 status.go:343] host is not running, skipping remaining checks
	I0719 03:59:58.888230    8448 status.go:257] ha-055700-m02 status: &{Name:ha-055700-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 03:59:58.888230    8448 status.go:255] checking status of ha-055700-m03 ...
	I0719 03:59:58.910024    8448 cli_runner.go:164] Run: docker container inspect ha-055700-m03 --format={{.State.Status}}
	I0719 03:59:59.107110    8448 status.go:330] ha-055700-m03 host status = "Running" (err=<nil>)
	I0719 03:59:59.107110    8448 host.go:66] Checking if "ha-055700-m03" exists ...
	I0719 03:59:59.119005    8448 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-055700-m03
	I0719 03:59:59.293313    8448 host.go:66] Checking if "ha-055700-m03" exists ...
	I0719 03:59:59.307911    8448 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 03:59:59.316173    8448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-055700-m03
	I0719 03:59:59.495922    8448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63750 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-055700-m03\id_rsa Username:docker}
	I0719 03:59:59.627735    8448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 03:59:59.672782    8448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-055700
	I0719 03:59:59.855061    8448 kubeconfig.go:125] found "ha-055700" server: "https://127.0.0.1:63621"
	I0719 03:59:59.855061    8448 api_server.go:166] Checking apiserver status ...
	I0719 03:59:59.870208    8448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 03:59:59.918049    8448 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2403/cgroup
	I0719 03:59:59.937060    8448 api_server.go:182] apiserver freezer: "7:freezer:/docker/9f326465720f9b05572ed44ce3fbaa82e30abaabf2fe357487b8e51ad66910f4/kubepods/burstable/podeafb4f87d691b2cc1f303a09e1635366/4302db1234ebc48f95372563b9e3cf00194198af683d91db990366a421e6a2e3"
	I0719 03:59:59.957068    8448 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/9f326465720f9b05572ed44ce3fbaa82e30abaabf2fe357487b8e51ad66910f4/kubepods/burstable/podeafb4f87d691b2cc1f303a09e1635366/4302db1234ebc48f95372563b9e3cf00194198af683d91db990366a421e6a2e3/freezer.state
	I0719 03:59:59.997780    8448 api_server.go:204] freezer state: "THAWED"
	I0719 03:59:59.997780    8448 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:63621/healthz ...
	I0719 04:00:00.016517    8448 api_server.go:279] https://127.0.0.1:63621/healthz returned 200:
	ok
	I0719 04:00:00.016517    8448 status.go:422] ha-055700-m03 apiserver status = Running (err=<nil>)
	I0719 04:00:00.016517    8448 status.go:257] ha-055700-m03 status: &{Name:ha-055700-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 04:00:00.017523    8448 status.go:255] checking status of ha-055700-m04 ...
	I0719 04:00:00.037526    8448 cli_runner.go:164] Run: docker container inspect ha-055700-m04 --format={{.State.Status}}
	I0719 04:00:00.227034    8448 status.go:330] ha-055700-m04 host status = "Running" (err=<nil>)
	I0719 04:00:00.227034    8448 host.go:66] Checking if "ha-055700-m04" exists ...
	I0719 04:00:00.242110    8448 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-055700-m04
	I0719 04:00:00.411954    8448 host.go:66] Checking if "ha-055700-m04" exists ...
	I0719 04:00:00.427563    8448 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 04:00:00.437038    8448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-055700-m04
	I0719 04:00:00.616461    8448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63887 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-055700-m04\id_rsa Username:docker}
	I0719 04:00:00.751719    8448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 04:00:00.779639    8448 status.go:257] ha-055700-m04 status: &{Name:ha-055700-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (15.59s)
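The `status` stderr above shows how the running-apiserver check works: `pgrep` finds the kube-apiserver PID, the `^N:freezer:` line is grepped out of `/proc/<pid>/cgroup`, and the matching `freezer.state` under `/sys/fs/cgroup/freezer` is read ("THAWED" means not frozen). A sketch of that path construction, assuming a simplified sample cgroup line (the helper is illustrative, not minikube's actual code):

```python
import re

# Locate the freezer cgroup for a process, mirroring the sequence in the
# status log: egrep ^[0-9]+:freezer: /proc/<pid>/cgroup, then read
# /sys/fs/cgroup/freezer/<path>/freezer.state. Sample data is illustrative.

def freezer_state_path(proc_cgroup: str) -> str:
    """Return the freezer.state path named by the freezer line of /proc/<pid>/cgroup."""
    m = re.search(r"^\d+:freezer:(\S+)$", proc_cgroup, re.MULTILINE)
    if not m:
        raise ValueError("no freezer cgroup line found")
    return f"/sys/fs/cgroup/freezer{m.group(1)}/freezer.state"

sample = "7:freezer:/docker/abc123/kubepods/burstable/pod42/ctr99"
print(freezer_state_path(sample))
# -> /sys/fs/cgroup/freezer/docker/abc123/kubepods/burstable/pod42/ctr99/freezer.state
```

Note this is the cgroup v1 layout; on cgroup v2 hosts there is no separate freezer hierarchy, which is why the probe is guarded by the earlier `egrep` step.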

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (2.77s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (2.766935s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (2.77s)

TestMultiControlPlane/serial/RestartSecondaryNode (159.27s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 node start m02 -v=7 --alsologtostderr
E0719 04:01:00.298105   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-365100\client.crt: The system cannot find the path specified.
E0719 04:01:28.111731   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-365100\client.crt: The system cannot find the path specified.
ha_test.go:420: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 node start m02 -v=7 --alsologtostderr: (2m34.443585s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 status -v=7 --alsologtostderr
E0719 04:02:39.617103   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-172900\client.crt: The system cannot find the path specified.
ha_test.go:428: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 status -v=7 --alsologtostderr: (4.6294029s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (159.27s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (3.71s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (3.7051513s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (3.71s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (275.35s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-windows-amd64.exe node list -p ha-055700 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-windows-amd64.exe stop -p ha-055700 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-windows-amd64.exe stop -p ha-055700 -v=7 --alsologtostderr: (40.8643039s)
ha_test.go:467: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-055700 --wait=true -v=7 --alsologtostderr
E0719 04:06:00.303073   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-365100\client.crt: The system cannot find the path specified.
ha_test.go:467: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-055700 --wait=true -v=7 --alsologtostderr: (3m53.9563183s)
ha_test.go:472: (dbg) Run:  out/minikube-windows-amd64.exe node list -p ha-055700
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (275.35s)

TestMultiControlPlane/serial/DeleteSecondaryNode (23.95s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 node delete m03 -v=7 --alsologtostderr
E0719 04:07:39.622555   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-172900\client.crt: The system cannot find the path specified.
ha_test.go:487: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 node delete m03 -v=7 --alsologtostderr: (20.2223641s)
ha_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 status -v=7 --alsologtostderr: (3.2760313s)
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (23.95s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (2.54s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (2.5396916s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (2.54s)

TestMultiControlPlane/serial/StopCluster (38.46s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 stop -v=7 --alsologtostderr: (37.6449715s)
ha_test.go:537: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-055700 status -v=7 --alsologtostderr: exit status 7 (813.1957ms)

-- stdout --
	ha-055700
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-055700-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-055700-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	W0719 04:08:26.149581    1304 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0719 04:08:26.226872    1304 out.go:291] Setting OutFile to fd 524 ...
	I0719 04:08:26.227272    1304 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:08:26.227272    1304 out.go:304] Setting ErrFile to fd 796...
	I0719 04:08:26.227272    1304 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:08:26.242726    1304 out.go:298] Setting JSON to false
	I0719 04:08:26.242726    1304 mustload.go:65] Loading cluster: ha-055700
	I0719 04:08:26.242726    1304 notify.go:220] Checking for updates...
	I0719 04:08:26.243573    1304 config.go:182] Loaded profile config "ha-055700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 04:08:26.243573    1304 status.go:255] checking status of ha-055700 ...
	I0719 04:08:26.264549    1304 cli_runner.go:164] Run: docker container inspect ha-055700 --format={{.State.Status}}
	I0719 04:08:26.436661    1304 status.go:330] ha-055700 host status = "Stopped" (err=<nil>)
	I0719 04:08:26.436661    1304 status.go:343] host is not running, skipping remaining checks
	I0719 04:08:26.436661    1304 status.go:257] ha-055700 status: &{Name:ha-055700 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 04:08:26.436661    1304 status.go:255] checking status of ha-055700-m02 ...
	I0719 04:08:26.456759    1304 cli_runner.go:164] Run: docker container inspect ha-055700-m02 --format={{.State.Status}}
	I0719 04:08:26.622498    1304 status.go:330] ha-055700-m02 host status = "Stopped" (err=<nil>)
	I0719 04:08:26.622498    1304 status.go:343] host is not running, skipping remaining checks
	I0719 04:08:26.622498    1304 status.go:257] ha-055700-m02 status: &{Name:ha-055700-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 04:08:26.622498    1304 status.go:255] checking status of ha-055700-m04 ...
	I0719 04:08:26.640547    1304 cli_runner.go:164] Run: docker container inspect ha-055700-m04 --format={{.State.Status}}
	I0719 04:08:26.828109    1304 status.go:330] ha-055700-m04 host status = "Stopped" (err=<nil>)
	I0719 04:08:26.828109    1304 status.go:343] host is not running, skipping remaining checks
	I0719 04:08:26.828109    1304 status.go:257] ha-055700-m04 status: &{Name:ha-055700-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (38.46s)

TestMultiControlPlane/serial/RestartCluster (164.68s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-055700 --wait=true -v=7 --alsologtostderr --driver=docker
E0719 04:11:00.297390   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-365100\client.crt: The system cannot find the path specified.
ha_test.go:560: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-055700 --wait=true -v=7 --alsologtostderr --driver=docker: (2m40.9875439s)
ha_test.go:566: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 status -v=7 --alsologtostderr
ha_test.go:566: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 status -v=7 --alsologtostderr: (3.2544585s)
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (164.68s)
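The `kubectl get nodes -o go-template` call above walks `.items`, then each node's `.status.conditions`, printing the status of the condition whose type is `Ready`. This sketch mirrors that template logic in Python against a minimal hand-made sample of the node-list JSON (the sample data is illustrative, not taken from the cluster):

```python
# Python equivalent of the go-template used above:
# {{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}...
# Sample node list is illustrative.

def ready_statuses(node_list: dict) -> list:
    """Status of each node's Ready condition, in list order."""
    return [
        cond["status"]
        for item in node_list["items"]
        for cond in item["status"]["conditions"]
        if cond["type"] == "Ready"
    ]

sample = {
    "items": [
        {"status": {"conditions": [{"type": "Ready", "status": "True"}]}},
        {"status": {"conditions": [{"type": "Ready", "status": "True"}]}},
    ]
}
print(ready_statuses(sample))  # -> ['True', 'True']
```

The test asserts every printed status is `True`, which is why a node stuck `NotReady` after the restart would fail this step even though the nodes are listed.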

TestMultiControlPlane/serial/DegradedAfterClusterRestart (2.56s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (2.5591798s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (2.56s)

TestMultiControlPlane/serial/AddSecondaryNode (91.84s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-windows-amd64.exe node add -p ha-055700 --control-plane -v=7 --alsologtostderr
E0719 04:12:23.487630   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-365100\client.crt: The system cannot find the path specified.
E0719 04:12:39.627813   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-172900\client.crt: The system cannot find the path specified.
ha_test.go:605: (dbg) Done: out/minikube-windows-amd64.exe node add -p ha-055700 --control-plane -v=7 --alsologtostderr: (1m27.3427152s)
ha_test.go:611: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-055700 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-windows-amd64.exe -p ha-055700 status -v=7 --alsologtostderr: (4.4968284s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (91.84s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (3.61s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (3.6121754s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (3.61s)

TestImageBuild/serial/Setup (77.93s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-237400 --driver=docker
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-237400 --driver=docker: (1m17.9294036s)
--- PASS: TestImageBuild/serial/Setup (77.93s)

TestImageBuild/serial/NormalBuild (4.14s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-237400
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-237400: (4.1357084s)
--- PASS: TestImageBuild/serial/NormalBuild (4.14s)

TestImageBuild/serial/BuildWithBuildArg (2.72s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-237400
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-237400: (2.7206917s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (2.72s)

TestImageBuild/serial/BuildWithDockerIgnore (1.75s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-237400
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-237400: (1.751511s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (1.75s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (2.04s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-237400
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-237400: (2.0404592s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (2.04s)

TestJSONOutput/start/Command (116.42s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-273900 --output=json --user=testUser --memory=2200 --wait=true --driver=docker
E0719 04:15:42.832163   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-172900\client.crt: The system cannot find the path specified.
E0719 04:16:00.305106   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-365100\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-273900 --output=json --user=testUser --memory=2200 --wait=true --driver=docker: (1m56.4178137s)
--- PASS: TestJSONOutput/start/Command (116.42s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (1.73s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-273900 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-273900 --output=json --user=testUser: (1.7256134s)
--- PASS: TestJSONOutput/pause/Command (1.73s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (1.62s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-273900 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-273900 --output=json --user=testUser: (1.6170566s)
--- PASS: TestJSONOutput/unpause/Command (1.62s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (13.02s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-273900 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-273900 --output=json --user=testUser: (13.0220052s)
--- PASS: TestJSONOutput/stop/Command (13.02s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (1.52s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-936100 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-936100 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (269.518ms)

-- stdout --
	{"specversion":"1.0","id":"3021a851-12dd-47da-b58e-5140166b80f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-936100] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8356852e-4ab4-4109-a0e1-73ef368a34b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube3\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"b492754e-afe3-4544-a520-2cfd8b8a70c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"43b7ada4-3afa-40fc-80fb-320602ff7246","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"8715a0f4-0929-4d0a-9d07-6195ff0a0f13","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19302"}}
	{"specversion":"1.0","id":"ae92d4dd-e9b7-4df4-a482-3cfd1cb26443","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"42e9d18b-8a5f-4227-8e54-d2b6ab139b13","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
** stderr ** 
	W0719 04:17:01.891856    1184 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
helpers_test.go:175: Cleaning up "json-output-error-936100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-936100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-936100: (1.2451266s)
--- PASS: TestErrorJSONOutput (1.52s)

TestKicCustomNetwork/create_custom_network (83.28s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-437900 --network=
E0719 04:17:39.623532   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-172900\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-437900 --network=: (1m18.2152465s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-437900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-437900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-437900: (4.8434547s)
--- PASS: TestKicCustomNetwork/create_custom_network (83.28s)

TestKicCustomNetwork/use_default_bridge_network (82.11s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-619000 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-619000 --network=bridge: (1m17.3038695s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-619000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-619000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-619000: (4.5984445s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (82.11s)

TestKicExistingNetwork (81.12s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-windows-amd64.exe start -p existing-network-412700 --network=existing-network
E0719 04:21:00.311544   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-365100\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:93: (dbg) Done: out/minikube-windows-amd64.exe start -p existing-network-412700 --network=existing-network: (1m14.9600602s)
helpers_test.go:175: Cleaning up "existing-network-412700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p existing-network-412700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p existing-network-412700: (4.6612569s)
--- PASS: TestKicExistingNetwork (81.12s)

TestKicCustomSubnet (82.64s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-subnet-587700 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-subnet-587700 --subnet=192.168.60.0/24: (1m17.5110552s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-587700 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-587700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p custom-subnet-587700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p custom-subnet-587700: (4.9475323s)
--- PASS: TestKicCustomSubnet (82.64s)

TestKicStaticIP (85.16s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe start -p static-ip-251700 --static-ip=192.168.200.200
E0719 04:22:39.636929   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-172900\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe start -p static-ip-251700 --static-ip=192.168.200.200: (1m18.9372971s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-windows-amd64.exe -p static-ip-251700 ip
helpers_test.go:175: Cleaning up "static-ip-251700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p static-ip-251700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p static-ip-251700: (5.5078802s)
--- PASS: TestKicStaticIP (85.16s)

TestMainNoArgs (0.24s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.24s)

TestMinikubeProfile (169.34s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-295100 --driver=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-295100 --driver=docker: (1m14.1265314s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-295100 --driver=docker
E0719 04:26:00.313232   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-365100\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-295100 --driver=docker: (1m13.7207994s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-295100
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (2.7897854s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-295100
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (3.4516918s)
helpers_test.go:175: Cleaning up "second-295100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-295100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-295100: (7.6804944s)
helpers_test.go:175: Cleaning up "first-295100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-295100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-295100: (6.4665424s)
--- PASS: TestMinikubeProfile (169.34s)

TestMountStart/serial/StartWithMountFirst (33.08s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-826600 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-826600 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker: (32.0670748s)
--- PASS: TestMountStart/serial/StartWithMountFirst (33.08s)

TestMountStart/serial/VerifyMountFirst (1.16s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-826600 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-826600 ssh -- ls /minikube-host: (1.1581314s)
--- PASS: TestMountStart/serial/VerifyMountFirst (1.16s)

TestMountStart/serial/StartWithMountSecond (31.81s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-826600 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker
E0719 04:27:39.631053   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-172900\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-826600 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker: (30.8064046s)
--- PASS: TestMountStart/serial/StartWithMountSecond (31.81s)

TestMountStart/serial/VerifyMountSecond (1.19s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-826600 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-826600 ssh -- ls /minikube-host: (1.1937525s)
--- PASS: TestMountStart/serial/VerifyMountSecond (1.19s)

TestMountStart/serial/DeleteFirst (4.14s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-826600 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-826600 --alsologtostderr -v=5: (4.142879s)
--- PASS: TestMountStart/serial/DeleteFirst (4.14s)

TestMountStart/serial/VerifyMountPostDelete (1.16s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-826600 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-826600 ssh -- ls /minikube-host: (1.1622273s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (1.16s)

TestMountStart/serial/Stop (2.6s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-826600
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-826600: (2.6028448s)
--- PASS: TestMountStart/serial/Stop (2.60s)

TestMountStart/serial/RestartStopped (23.52s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-826600
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-826600: (22.5017797s)
--- PASS: TestMountStart/serial/RestartStopped (23.52s)

TestMountStart/serial/VerifyMountPostStop (1.22s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-826600 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-826600 ssh -- ls /minikube-host: (1.2199924s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (1.22s)

TestMultiNode/serial/FreshStart2Nodes (183.62s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-292000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker
E0719 04:29:03.511268   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-365100\client.crt: The system cannot find the path specified.
E0719 04:31:00.304496   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-365100\client.crt: The system cannot find the path specified.
multinode_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-292000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker: (3m1.041962s)
multinode_test.go:102: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-292000 status --alsologtostderr
multinode_test.go:102: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-292000 status --alsologtostderr: (2.5781437s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (183.62s)

TestMultiNode/serial/DeployApp2Nodes (37.76s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-292000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-292000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-292000 -- rollout status deployment/busybox: (30.3620863s)
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-292000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-292000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-292000 -- exec busybox-fc5497c4f-jbbnv -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-292000 -- exec busybox-fc5497c4f-jbbnv -- nslookup kubernetes.io: (1.9741597s)
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-292000 -- exec busybox-fc5497c4f-zbzbd -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-292000 -- exec busybox-fc5497c4f-zbzbd -- nslookup kubernetes.io: (1.5671245s)
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-292000 -- exec busybox-fc5497c4f-jbbnv -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-292000 -- exec busybox-fc5497c4f-zbzbd -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-292000 -- exec busybox-fc5497c4f-jbbnv -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-292000 -- exec busybox-fc5497c4f-zbzbd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (37.76s)

TestMultiNode/serial/PingHostFrom2Pods (2.59s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-292000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-292000 -- exec busybox-fc5497c4f-jbbnv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-292000 -- exec busybox-fc5497c4f-jbbnv -- sh -c "ping -c 1 192.168.65.254"
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-292000 -- exec busybox-fc5497c4f-zbzbd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-292000 -- exec busybox-fc5497c4f-zbzbd -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (2.59s)

TestMultiNode/serial/AddNode (67.51s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-292000 -v 3 --alsologtostderr
E0719 04:32:22.850663   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-172900\client.crt: The system cannot find the path specified.
E0719 04:32:39.640370   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-172900\client.crt: The system cannot find the path specified.
multinode_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-292000 -v 3 --alsologtostderr: (1m4.3041574s)
multinode_test.go:127: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-292000 status --alsologtostderr
multinode_test.go:127: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-292000 status --alsologtostderr: (3.2016783s)
--- PASS: TestMultiNode/serial/AddNode (67.51s)

TestMultiNode/serial/MultiNodeLabels (0.2s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-292000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.20s)

TestMultiNode/serial/ProfileList (1.52s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.5156981s)
--- PASS: TestMultiNode/serial/ProfileList (1.52s)

TestMultiNode/serial/CopyFile (42.84s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-292000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-292000 status --output json --alsologtostderr: (3.0901918s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-292000 cp testdata\cp-test.txt multinode-292000:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-292000 cp testdata\cp-test.txt multinode-292000:/home/docker/cp-test.txt: (1.2419349s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-292000 ssh -n multinode-292000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-292000 ssh -n multinode-292000 "sudo cat /home/docker/cp-test.txt": (1.1842227s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-292000 cp multinode-292000:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiNodeserialCopyFile452976069\001\cp-test_multinode-292000.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-292000 cp multinode-292000:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiNodeserialCopyFile452976069\001\cp-test_multinode-292000.txt: (1.2369851s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-292000 ssh -n multinode-292000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-292000 ssh -n multinode-292000 "sudo cat /home/docker/cp-test.txt": (1.1983695s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-292000 cp multinode-292000:/home/docker/cp-test.txt multinode-292000-m02:/home/docker/cp-test_multinode-292000_multinode-292000-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-292000 cp multinode-292000:/home/docker/cp-test.txt multinode-292000-m02:/home/docker/cp-test_multinode-292000_multinode-292000-m02.txt: (1.7570254s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-292000 ssh -n multinode-292000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-292000 ssh -n multinode-292000 "sudo cat /home/docker/cp-test.txt": (1.1849074s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-292000 ssh -n multinode-292000-m02 "sudo cat /home/docker/cp-test_multinode-292000_multinode-292000-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-292000 ssh -n multinode-292000-m02 "sudo cat /home/docker/cp-test_multinode-292000_multinode-292000-m02.txt": (1.2087757s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-292000 cp multinode-292000:/home/docker/cp-test.txt multinode-292000-m03:/home/docker/cp-test_multinode-292000_multinode-292000-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-292000 cp multinode-292000:/home/docker/cp-test.txt multinode-292000-m03:/home/docker/cp-test_multinode-292000_multinode-292000-m03.txt: (1.7366977s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-292000 ssh -n multinode-292000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-292000 ssh -n multinode-292000 "sudo cat /home/docker/cp-test.txt": (1.1811185s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-292000 ssh -n multinode-292000-m03 "sudo cat /home/docker/cp-test_multinode-292000_multinode-292000-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-292000 ssh -n multinode-292000-m03 "sudo cat /home/docker/cp-test_multinode-292000_multinode-292000-m03.txt": (1.1999851s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-292000 cp testdata\cp-test.txt multinode-292000-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-292000 cp testdata\cp-test.txt multinode-292000-m02:/home/docker/cp-test.txt: (1.2196431s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-292000 ssh -n multinode-292000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-292000 ssh -n multinode-292000-m02 "sudo cat /home/docker/cp-test.txt": (1.1992783s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-292000 cp multinode-292000-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiNodeserialCopyFile452976069\001\cp-test_multinode-292000-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-292000 cp multinode-292000-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiNodeserialCopyFile452976069\001\cp-test_multinode-292000-m02.txt: (1.2275727s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-292000 ssh -n multinode-292000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-292000 ssh -n multinode-292000-m02 "sudo cat /home/docker/cp-test.txt": (1.1841787s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-292000 cp multinode-292000-m02:/home/docker/cp-test.txt multinode-292000:/home/docker/cp-test_multinode-292000-m02_multinode-292000.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-292000 cp multinode-292000-m02:/home/docker/cp-test.txt multinode-292000:/home/docker/cp-test_multinode-292000-m02_multinode-292000.txt: (1.7587529s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-292000 ssh -n multinode-292000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-292000 ssh -n multinode-292000-m02 "sudo cat /home/docker/cp-test.txt": (1.1683952s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-292000 ssh -n multinode-292000 "sudo cat /home/docker/cp-test_multinode-292000-m02_multinode-292000.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-292000 ssh -n multinode-292000 "sudo cat /home/docker/cp-test_multinode-292000-m02_multinode-292000.txt": (1.1982245s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-292000 cp multinode-292000-m02:/home/docker/cp-test.txt multinode-292000-m03:/home/docker/cp-test_multinode-292000-m02_multinode-292000-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-292000 cp multinode-292000-m02:/home/docker/cp-test.txt multinode-292000-m03:/home/docker/cp-test_multinode-292000-m02_multinode-292000-m03.txt: (1.7014344s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-292000 ssh -n multinode-292000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-292000 ssh -n multinode-292000-m02 "sudo cat /home/docker/cp-test.txt": (1.1989479s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-292000 ssh -n multinode-292000-m03 "sudo cat /home/docker/cp-test_multinode-292000-m02_multinode-292000-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-292000 ssh -n multinode-292000-m03 "sudo cat /home/docker/cp-test_multinode-292000-m02_multinode-292000-m03.txt": (1.2174853s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-292000 cp testdata\cp-test.txt multinode-292000-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-292000 cp testdata\cp-test.txt multinode-292000-m03:/home/docker/cp-test.txt: (1.2948901s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-292000 ssh -n multinode-292000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-292000 ssh -n multinode-292000-m03 "sudo cat /home/docker/cp-test.txt": (1.2181024s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-292000 cp multinode-292000-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiNodeserialCopyFile452976069\001\cp-test_multinode-292000-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-292000 cp multinode-292000-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiNodeserialCopyFile452976069\001\cp-test_multinode-292000-m03.txt: (1.2427896s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-292000 ssh -n multinode-292000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-292000 ssh -n multinode-292000-m03 "sudo cat /home/docker/cp-test.txt": (1.2426486s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-292000 cp multinode-292000-m03:/home/docker/cp-test.txt multinode-292000:/home/docker/cp-test_multinode-292000-m03_multinode-292000.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-292000 cp multinode-292000-m03:/home/docker/cp-test.txt multinode-292000:/home/docker/cp-test_multinode-292000-m03_multinode-292000.txt: (1.7752691s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-292000 ssh -n multinode-292000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-292000 ssh -n multinode-292000-m03 "sudo cat /home/docker/cp-test.txt": (1.2267917s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-292000 ssh -n multinode-292000 "sudo cat /home/docker/cp-test_multinode-292000-m03_multinode-292000.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-292000 ssh -n multinode-292000 "sudo cat /home/docker/cp-test_multinode-292000-m03_multinode-292000.txt": (1.2341232s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-292000 cp multinode-292000-m03:/home/docker/cp-test.txt multinode-292000-m02:/home/docker/cp-test_multinode-292000-m03_multinode-292000-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-292000 cp multinode-292000-m03:/home/docker/cp-test.txt multinode-292000-m02:/home/docker/cp-test_multinode-292000-m03_multinode-292000-m02.txt: (1.7930109s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-292000 ssh -n multinode-292000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-292000 ssh -n multinode-292000-m03 "sudo cat /home/docker/cp-test.txt": (1.2387671s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-292000 ssh -n multinode-292000-m02 "sudo cat /home/docker/cp-test_multinode-292000-m03_multinode-292000-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-292000 ssh -n multinode-292000-m02 "sudo cat /home/docker/cp-test_multinode-292000-m03_multinode-292000-m02.txt": (1.260225s)
--- PASS: TestMultiNode/serial/CopyFile (42.84s)

TestMultiNode/serial/StopNode (7s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-292000 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-292000 node stop m03: (2.2992492s)
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-292000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-292000 status: exit status 7 (2.3351124s)

-- stdout --
	multinode-292000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-292000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-292000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	W0719 04:34:12.661544   13588 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-292000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-292000 status --alsologtostderr: exit status 7 (2.3664941s)

-- stdout --
	multinode-292000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-292000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-292000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	W0719 04:34:15.005810    1148 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0719 04:34:15.094286    1148 out.go:291] Setting OutFile to fd 1008 ...
	I0719 04:34:15.095207    1148 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:34:15.095207    1148 out.go:304] Setting ErrFile to fd 996...
	I0719 04:34:15.095207    1148 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:34:15.111837    1148 out.go:298] Setting JSON to false
	I0719 04:34:15.111837    1148 mustload.go:65] Loading cluster: multinode-292000
	I0719 04:34:15.111837    1148 notify.go:220] Checking for updates...
	I0719 04:34:15.112662    1148 config.go:182] Loaded profile config "multinode-292000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 04:34:15.112662    1148 status.go:255] checking status of multinode-292000 ...
	I0719 04:34:15.136657    1148 cli_runner.go:164] Run: docker container inspect multinode-292000 --format={{.State.Status}}
	I0719 04:34:15.326588    1148 status.go:330] multinode-292000 host status = "Running" (err=<nil>)
	I0719 04:34:15.326657    1148 host.go:66] Checking if "multinode-292000" exists ...
	I0719 04:34:15.338008    1148 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-292000
	I0719 04:34:15.513766    1148 host.go:66] Checking if "multinode-292000" exists ...
	I0719 04:34:15.529224    1148 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 04:34:15.539978    1148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-292000
	I0719 04:34:15.730759    1148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65039 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-292000\id_rsa Username:docker}
	I0719 04:34:15.874130    1148 ssh_runner.go:195] Run: systemctl --version
	I0719 04:34:15.899549    1148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 04:34:15.935208    1148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-292000
	I0719 04:34:16.117186    1148 kubeconfig.go:125] found "multinode-292000" server: "https://127.0.0.1:65038"
	I0719 04:34:16.117186    1148 api_server.go:166] Checking apiserver status ...
	I0719 04:34:16.130283    1148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 04:34:16.171934    1148 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2542/cgroup
	I0719 04:34:16.197604    1148 api_server.go:182] apiserver freezer: "7:freezer:/docker/d7b2d9897a9598e65aaa11d3fefcb49d59dfc382d16027cffae521aa0917a044/kubepods/burstable/pod619513e87dcc1ee8b7326c509a3059b7/c92688b3f82720d5c3ac4f495c2626b0921476bdca52a34c21e9a64aae9f05ef"
	I0719 04:34:16.210374    1148 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d7b2d9897a9598e65aaa11d3fefcb49d59dfc382d16027cffae521aa0917a044/kubepods/burstable/pod619513e87dcc1ee8b7326c509a3059b7/c92688b3f82720d5c3ac4f495c2626b0921476bdca52a34c21e9a64aae9f05ef/freezer.state
	I0719 04:34:16.235015    1148 api_server.go:204] freezer state: "THAWED"
	I0719 04:34:16.235015    1148 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:65038/healthz ...
	I0719 04:34:16.247019    1148 api_server.go:279] https://127.0.0.1:65038/healthz returned 200:
	ok
	I0719 04:34:16.247019    1148 status.go:422] multinode-292000 apiserver status = Running (err=<nil>)
	I0719 04:34:16.247019    1148 status.go:257] multinode-292000 status: &{Name:multinode-292000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 04:34:16.247019    1148 status.go:255] checking status of multinode-292000-m02 ...
	I0719 04:34:16.267010    1148 cli_runner.go:164] Run: docker container inspect multinode-292000-m02 --format={{.State.Status}}
	I0719 04:34:16.455957    1148 status.go:330] multinode-292000-m02 host status = "Running" (err=<nil>)
	I0719 04:34:16.455957    1148 host.go:66] Checking if "multinode-292000-m02" exists ...
	I0719 04:34:16.468199    1148 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-292000-m02
	I0719 04:34:16.657541    1148 host.go:66] Checking if "multinode-292000-m02" exists ...
	I0719 04:34:16.673637    1148 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 04:34:16.682661    1148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-292000-m02
	I0719 04:34:16.844608    1148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65093 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-292000-m02\id_rsa Username:docker}
	I0719 04:34:16.985975    1148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 04:34:17.011014    1148 status.go:257] multinode-292000-m02 status: &{Name:multinode-292000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0719 04:34:17.011014    1148 status.go:255] checking status of multinode-292000-m03 ...
	I0719 04:34:17.030697    1148 cli_runner.go:164] Run: docker container inspect multinode-292000-m03 --format={{.State.Status}}
	I0719 04:34:17.230763    1148 status.go:330] multinode-292000-m03 host status = "Stopped" (err=<nil>)
	I0719 04:34:17.230763    1148 status.go:343] host is not running, skipping remaining checks
	I0719 04:34:17.230763    1148 status.go:257] multinode-292000-m03 status: &{Name:multinode-292000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (7.00s)

TestMultiNode/serial/StartAfterStop (29.66s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-292000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-292000 node start m03 -v=7 --alsologtostderr: (26.4376443s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-292000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-292000 status -v=7 --alsologtostderr: (3.0378035s)
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (29.66s)

TestMultiNode/serial/RestartKeepsNodes (141.87s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-292000
multinode_test.go:321: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-292000
multinode_test.go:321: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-292000: (26.3562153s)
multinode_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-292000 --wait=true -v=8 --alsologtostderr
E0719 04:36:00.319521   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-365100\client.crt: The system cannot find the path specified.
multinode_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-292000 --wait=true -v=8 --alsologtostderr: (1m55.0656359s)
multinode_test.go:331: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-292000
--- PASS: TestMultiNode/serial/RestartKeepsNodes (141.87s)

TestMultiNode/serial/DeleteNode (14.16s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-292000 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-292000 node delete m03: (11.4917807s)
multinode_test.go:422: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-292000 status --alsologtostderr
multinode_test.go:422: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-292000 status --alsologtostderr: (2.2188193s)
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (14.16s)

TestMultiNode/serial/StopMultiNode (25.84s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-292000 stop
E0719 04:37:39.635967   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-172900\client.crt: The system cannot find the path specified.
multinode_test.go:345: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-292000 stop: (24.5575514s)
multinode_test.go:351: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-292000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-292000 status: exit status 7 (624.9307ms)

-- stdout --
	multinode-292000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-292000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	W0719 04:37:47.616025    6472 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
multinode_test.go:358: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-292000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-292000 status --alsologtostderr: exit status 7 (656.2035ms)

-- stdout --
	multinode-292000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-292000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	W0719 04:37:48.239560    6360 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0719 04:37:48.332202    6360 out.go:291] Setting OutFile to fd 784 ...
	I0719 04:37:48.332859    6360 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:37:48.332859    6360 out.go:304] Setting ErrFile to fd 600...
	I0719 04:37:48.332859    6360 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:37:48.349019    6360 out.go:298] Setting JSON to false
	I0719 04:37:48.349019    6360 mustload.go:65] Loading cluster: multinode-292000
	I0719 04:37:48.349019    6360 notify.go:220] Checking for updates...
	I0719 04:37:48.350055    6360 config.go:182] Loaded profile config "multinode-292000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 04:37:48.350055    6360 status.go:255] checking status of multinode-292000 ...
	I0719 04:37:48.371191    6360 cli_runner.go:164] Run: docker container inspect multinode-292000 --format={{.State.Status}}
	I0719 04:37:48.563563    6360 status.go:330] multinode-292000 host status = "Stopped" (err=<nil>)
	I0719 04:37:48.563563    6360 status.go:343] host is not running, skipping remaining checks
	I0719 04:37:48.563563    6360 status.go:257] multinode-292000 status: &{Name:multinode-292000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 04:37:48.563563    6360 status.go:255] checking status of multinode-292000-m02 ...
	I0719 04:37:48.582612    6360 cli_runner.go:164] Run: docker container inspect multinode-292000-m02 --format={{.State.Status}}
	I0719 04:37:48.767159    6360 status.go:330] multinode-292000-m02 host status = "Stopped" (err=<nil>)
	I0719 04:37:48.767159    6360 status.go:343] host is not running, skipping remaining checks
	I0719 04:37:48.767159    6360 status.go:257] multinode-292000-m02 status: &{Name:multinode-292000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.84s)

TestMultiNode/serial/RestartMultiNode (80.03s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-292000 --wait=true -v=8 --alsologtostderr --driver=docker
multinode_test.go:376: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-292000 --wait=true -v=8 --alsologtostderr --driver=docker: (1m17.4674795s)
multinode_test.go:382: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-292000 status --alsologtostderr
multinode_test.go:382: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-292000 status --alsologtostderr: (2.1280531s)
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (80.03s)

TestMultiNode/serial/ValidateNameConflict (77.13s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-292000
multinode_test.go:464: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-292000-m02 --driver=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-292000-m02 --driver=docker: exit status 14 (305.9953ms)

-- stdout --
	* [multinode-292000-m02] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	  - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19302
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	W0719 04:39:09.177552   13672 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	! Profile name 'multinode-292000-m02' is duplicated with machine name 'multinode-292000-m02' in profile 'multinode-292000'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-292000-m03 --driver=docker
multinode_test.go:472: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-292000-m03 --driver=docker: (1m9.7880961s)
multinode_test.go:479: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-292000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p multinode-292000: exit status 80 (1.1023332s)

-- stdout --
	* Adding node m03 to cluster multinode-292000 as [worker]
	
	

-- /stdout --
** stderr ** 
	W0719 04:40:19.273951   10788 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-292000-m03 already exists in multinode-292000-m03 profile
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube_node_2bbdfd0e0a46af455ae5a771b1270736051e61d9_7.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-windows-amd64.exe delete -p multinode-292000-m03
multinode_test.go:484: (dbg) Done: out/minikube-windows-amd64.exe delete -p multinode-292000-m03: (5.6872643s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (77.13s)

TestPreload (191.71s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-315200 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.24.4
E0719 04:41:00.307394   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-365100\client.crt: The system cannot find the path specified.
E0719 04:42:39.638304   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-172900\client.crt: The system cannot find the path specified.
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-315200 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.24.4: (2m8.2172119s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-315200 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-315200 image pull gcr.io/k8s-minikube/busybox: (2.5455107s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-315200
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-315200: (12.4164438s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-315200 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker
preload_test.go:66: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-315200 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker: (41.7015282s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-315200 image list
preload_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-315200 image list: (1.1806531s)
helpers_test.go:175: Cleaning up "test-preload-315200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-315200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-315200: (5.6438877s)
--- PASS: TestPreload (191.71s)

TestScheduledStopWindows (140.58s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-800500 --memory=2048 --driver=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-800500 --memory=2048 --driver=docker: (1m10.0051569s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-800500 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-800500 --schedule 5m: (1.4301778s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-800500 -n scheduled-stop-800500
scheduled_stop_test.go:191: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-800500 -n scheduled-stop-800500: (1.3260884s)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-800500 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-800500 -- sudo systemctl show minikube-scheduled-stop --no-page: (1.2174621s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-800500 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-800500 --schedule 5s: (1.4247905s)
E0719 04:45:43.525786   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-365100\client.crt: The system cannot find the path specified.
E0719 04:46:00.321472   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-365100\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-800500
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-800500: exit status 7 (458.5241ms)

-- stdout --
	scheduled-stop-800500
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	W0719 04:46:03.546517    2388 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-800500 -n scheduled-stop-800500
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-800500 -n scheduled-stop-800500: exit status 7 (418.6359ms)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0719 04:46:03.995305    9108 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-800500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-800500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-800500: (4.2758432s)
--- PASS: TestScheduledStopWindows (140.58s)

TestInsufficientStorage (56.28s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe start -p insufficient-storage-072600 --memory=2048 --output=json --wait=true --driver=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p insufficient-storage-072600 --memory=2048 --output=json --wait=true --driver=docker: exit status 26 (49.349851s)

-- stdout --
	{"specversion":"1.0","id":"b3ec2d55-1e6e-4721-9a5b-8e7e3fb2def4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-072600] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b0734b5b-b48e-4b01-9fe3-ec9d49bb598d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube3\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"aa907290-5c84-4535-ae05-e28401ec6700","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7ca928b1-4b79-4ffe-bd8f-12e7b81eaf3d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"e23777a2-1525-40a5-b351-f9fb034b1a12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19302"}}
	{"specversion":"1.0","id":"3b3879bc-7e4e-4354-afa3-2a3aef5231d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"cc35be2b-4e12-4417-ac4a-4a8a95f2c3f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"48811c84-ed98-4f56-a095-09ddfbe609a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"fbdeb85a-5f17-4bf1-a347-4a1a7305c616","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"fc6c6b69-3a5f-4577-a11f-752aefea1dfb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"b3b0aef9-23a9-4c93-9b82-9bcfd9a30edd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-072600\" primary control-plane node in \"insufficient-storage-072600\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"e0a4aeb5-0252-410e-9e0d-c13552a5bc77","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1721324606-19298 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"e8038a1e-79d7-42b1-afa2-1acb16fd211b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"8cb12aa5-e806-41d3-82ff-f928166ae883","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
** stderr ** 
	W0719 04:46:08.699721    3332 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-072600 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-072600 --output=json --layout=cluster: exit status 7 (1.1955933s)

-- stdout --
	{"Name":"insufficient-storage-072600","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-072600","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	W0719 04:46:58.050626   14140 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0719 04:46:59.072879   14140 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-072600" does not appear in C:\Users\jenkins.minikube3\minikube-integration\kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-072600 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-072600 --output=json --layout=cluster: exit status 7 (1.2589457s)

-- stdout --
	{"Name":"insufficient-storage-072600","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-072600","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	W0719 04:46:59.246088     120 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0719 04:47:00.334273     120 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-072600" does not appear in C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	E0719 04:47:00.371332     120 status.go:560] unable to read event log: stat: CreateFile C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\insufficient-storage-072600\events.json: The system cannot find the file specified.

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-072600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p insufficient-storage-072600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p insufficient-storage-072600: (4.468057s)
--- PASS: TestInsufficientStorage (56.28s)

TestRunningBinaryUpgrade (440.24s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube-v1.26.0.1789728186.exe start -p running-upgrade-430600 --memory=2200 --vm-driver=docker
version_upgrade_test.go:120: (dbg) Done: C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube-v1.26.0.1789728186.exe start -p running-upgrade-430600 --memory=2200 --vm-driver=docker: (4m33.0436698s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-430600 --memory=2200 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-430600 --memory=2200 --alsologtostderr -v=1 --driver=docker: (2m23.5153755s)
helpers_test.go:175: Cleaning up "running-upgrade-430600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-430600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-430600: (21.8443548s)
--- PASS: TestRunningBinaryUpgrade (440.24s)

TestKubernetesUpgrade (596.52s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-623300 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker
E0719 04:57:39.652735   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-172900\client.crt: The system cannot find the path specified.
version_upgrade_test.go:222: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-623300 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker: (2m47.0318085s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-623300
version_upgrade_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-623300: (20.3883942s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-623300 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-623300 status --format={{.Host}}: exit status 7 (639.8009ms)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0719 05:00:18.620855    1960 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-623300 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=docker
E0719 05:01:00.316997   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-365100\client.crt: The system cannot find the path specified.
version_upgrade_test.go:243: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-623300 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=docker: (5m51.4006753s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-623300 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-623300 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-623300 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker: exit status 106 (347.5313ms)

-- stdout --
	* [kubernetes-upgrade-623300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	  - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19302
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	W0719 05:06:10.828920    2572 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-623300
	    minikube start -p kubernetes-upgrade-623300 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6233002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-623300 --kubernetes-version=v1.31.0-beta.0
	    

** /stderr **
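The `Unable to resolve the current Docker CLI context "default"` warning recurring throughout this report points at a metadata file whose directory name is the SHA-256 digest of the context name; a minimal illustrative sketch (not part of the test suite) showing that the hex directory in the logged path is simply `sha256("default")`:

```python
import hashlib

# The Docker CLI stores per-context metadata under
# ~/.docker/contexts/meta/<sha256 of the context name>/meta.json;
# the warning fires when that file does not exist for "default".
digest = hashlib.sha256(b"default").hexdigest()
print(digest)  # 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f
```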
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-623300 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-623300 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=docker: (48.0032868s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-623300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-623300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-623300: (8.5176532s)
--- PASS: TestKubernetesUpgrade (596.52s)

TestMissingContainerUpgrade (334.77s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube-v1.26.0.1652467370.exe start -p missing-upgrade-143000 --memory=2200 --driver=docker
version_upgrade_test.go:309: (dbg) Done: C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube-v1.26.0.1652467370.exe start -p missing-upgrade-143000 --memory=2200 --driver=docker: (1m39.6521787s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-143000
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-143000: (12.2420622s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-143000
version_upgrade_test.go:329: (dbg) Run:  out/minikube-windows-amd64.exe start -p missing-upgrade-143000 --memory=2200 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-windows-amd64.exe start -p missing-upgrade-143000 --memory=2200 --alsologtostderr -v=1 --driver=docker: (3m34.154061s)
helpers_test.go:175: Cleaning up "missing-upgrade-143000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p missing-upgrade-143000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p missing-upgrade-143000: (7.3085083s)
--- PASS: TestMissingContainerUpgrade (334.77s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.34s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-430600 --no-kubernetes --kubernetes-version=1.20 --driver=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-430600 --no-kubernetes --kubernetes-version=1.20 --driver=docker: exit status 14 (339.4915ms)

-- stdout --
	* [NoKubernetes-430600] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	  - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19302
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	W0719 04:47:04.986459    7536 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.34s)

TestStoppedBinaryUpgrade/Setup (1.83s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.83s)

TestNoKubernetes/serial/StartWithK8s (139.08s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-430600 --driver=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-430600 --driver=docker: (2m17.5399042s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-430600 status -o json
no_kubernetes_test.go:200: (dbg) Done: out/minikube-windows-amd64.exe -p NoKubernetes-430600 status -o json: (1.5376015s)
--- PASS: TestNoKubernetes/serial/StartWithK8s (139.08s)

TestStoppedBinaryUpgrade/Upgrade (416.48s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube-v1.26.0.753006667.exe start -p stopped-upgrade-430600 --memory=2200 --vm-driver=docker
E0719 04:47:39.649615   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-172900\client.crt: The system cannot find the path specified.
E0719 04:49:02.862236   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-172900\client.crt: The system cannot find the path specified.
version_upgrade_test.go:183: (dbg) Done: C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube-v1.26.0.753006667.exe start -p stopped-upgrade-430600 --memory=2200 --vm-driver=docker: (4m31.8313746s)
version_upgrade_test.go:192: (dbg) Run:  C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube-v1.26.0.753006667.exe -p stopped-upgrade-430600 stop
version_upgrade_test.go:192: (dbg) Done: C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube-v1.26.0.753006667.exe -p stopped-upgrade-430600 stop: (13.5859876s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-430600 --memory=2200 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-430600 --memory=2200 --alsologtostderr -v=1 --driver=docker: (2m11.0612494s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (416.48s)

TestNoKubernetes/serial/StartWithStopK8s (67.64s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-430600 --no-kubernetes --driver=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-430600 --no-kubernetes --driver=docker: (59.5004608s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-430600 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p NoKubernetes-430600 status -o json: exit status 2 (1.5263114s)

-- stdout --
	{"Name":"NoKubernetes-430600","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
** stderr ** 
	W0719 04:50:23.904617    5984 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-windows-amd64.exe delete -p NoKubernetes-430600
no_kubernetes_test.go:124: (dbg) Done: out/minikube-windows-amd64.exe delete -p NoKubernetes-430600: (6.610292s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (67.64s)

TestNoKubernetes/serial/Start (52.49s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-430600 --no-kubernetes --driver=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-430600 --no-kubernetes --driver=docker: (52.4885172s)
--- PASS: TestNoKubernetes/serial/Start (52.49s)

TestNoKubernetes/serial/VerifyK8sNotRunning (1.26s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p NoKubernetes-430600 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p NoKubernetes-430600 "sudo systemctl is-active --quiet service kubelet": exit status 1 (1.2616823s)

** stderr ** 
	W0719 04:51:24.519206   13788 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (1.26s)

TestNoKubernetes/serial/ProfileList (12.43s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-windows-amd64.exe profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-windows-amd64.exe profile list: (6.2745247s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe profile list --output=json: (6.1515523s)
--- PASS: TestNoKubernetes/serial/ProfileList (12.43s)

TestNoKubernetes/serial/Stop (6.83s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-windows-amd64.exe stop -p NoKubernetes-430600
no_kubernetes_test.go:158: (dbg) Done: out/minikube-windows-amd64.exe stop -p NoKubernetes-430600: (6.8278558s)
--- PASS: TestNoKubernetes/serial/Stop (6.83s)

TestNoKubernetes/serial/StartNoArgs (16.26s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-430600 --driver=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-430600 --driver=docker: (16.2610104s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (16.26s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (1.25s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p NoKubernetes-430600 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p NoKubernetes-430600 "sudo systemctl is-active --quiet service kubelet": exit status 1 (1.2454782s)

** stderr ** 
	W0719 04:52:01.300863    4596 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (1.25s)

TestStoppedBinaryUpgrade/MinikubeLogs (5.02s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-430600
version_upgrade_test.go:206: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-430600: (5.0188165s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (5.02s)

TestPause/serial/Start (157.82s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-392900 --memory=2048 --install-addons=false --wait=all --driver=docker
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-392900 --memory=2048 --install-addons=false --wait=all --driver=docker: (2m37.8213737s)
--- PASS: TestPause/serial/Start (157.82s)

TestPause/serial/SecondStartNoReconfiguration (71.18s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-392900 --alsologtostderr -v=1 --driver=docker
pause_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-392900 --alsologtostderr -v=1 --driver=docker: (1m11.1627434s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (71.18s)

TestPause/serial/Pause (2.08s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-392900 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-392900 --alsologtostderr -v=5: (2.0768776s)
--- PASS: TestPause/serial/Pause (2.08s)

TestPause/serial/VerifyStatus (1.56s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p pause-392900 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p pause-392900 --output=json --layout=cluster: exit status 2 (1.5572483s)

-- stdout --
	{"Name":"pause-392900","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-392900","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	W0719 04:58:16.334605    5728 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestPause/serial/VerifyStatus (1.56s)
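The `--output=json --layout=cluster` status shown in the stdout above is machine-readable; a small illustrative sketch (using an abridged copy of that JSON, where per the output 418 denotes Paused, 405 Stopped, and 200 OK) of how such status output can be checked programmatically:

```python
import json

# Abridged status JSON as emitted by
# `minikube status -p pause-392900 --output=json --layout=cluster`
# (copied from the test output above).
raw = (
    '{"Name":"pause-392900","StatusCode":418,"StatusName":"Paused",'
    '"Nodes":[{"Name":"pause-392900","StatusCode":200,"StatusName":"OK",'
    '"Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},'
    '"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}'
)

status = json.loads(raw)
cluster_paused = status["StatusName"] == "Paused"
apiserver_state = status["Nodes"][0]["Components"]["apiserver"]["StatusName"]
print(cluster_paused, apiserver_state)  # True Paused
```

A paused cluster exiting `minikube status` with code 2, as seen above, is expected: the non-zero exit mirrors the non-OK component states in the JSON.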

TestPause/serial/Unpause (1.87s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p pause-392900 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe unpause -p pause-392900 --alsologtostderr -v=5: (1.8665481s)
--- PASS: TestPause/serial/Unpause (1.87s)

TestPause/serial/PauseAgain (2.18s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-392900 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-392900 --alsologtostderr -v=5: (2.1795435s)
--- PASS: TestPause/serial/PauseAgain (2.18s)

TestPause/serial/DeletePaused (41.33s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p pause-392900 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p pause-392900 --alsologtostderr -v=5: (41.3247029s)
--- PASS: TestPause/serial/DeletePaused (41.33s)

TestNetworkPlugins/group/auto/Start (153.01s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p auto-255400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p auto-255400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker: (2m33.0136802s)
--- PASS: TestNetworkPlugins/group/auto/Start (153.01s)

TestNetworkPlugins/group/kindnet/Start (194.82s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p kindnet-255400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p kindnet-255400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker: (3m14.8210138s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (194.82s)

TestNetworkPlugins/group/auto/KubeletFlags (1.7s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p auto-255400 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p auto-255400 "pgrep -a kubelet": (1.697299s)
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (1.70s)

TestNetworkPlugins/group/auto/NetCatPod (21.73s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-255400 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-s45ft" [d9d34470-8cb2-4c9b-8c1e-2819dd581cfd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-s45ft" [d9d34470-8cb2-4c9b-8c1e-2819dd581cfd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 21.0231034s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (21.73s)

TestNetworkPlugins/group/calico/Start (223.55s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p calico-255400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p calico-255400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker: (3m43.5503185s)
--- PASS: TestNetworkPlugins/group/calico/Start (223.55s)

TestNetworkPlugins/group/auto/DNS (0.42s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-255400 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.42s)

TestNetworkPlugins/group/auto/Localhost (0.36s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-255400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.36s)

TestNetworkPlugins/group/auto/HairPin (0.46s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-255400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.46s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.02s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-wlxvp" [8aa17a9e-56f8-42e4-974d-3cf8d5bb5141] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.0160772s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.02s)

TestNetworkPlugins/group/kindnet/KubeletFlags (1.65s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kindnet-255400 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p kindnet-255400 "pgrep -a kubelet": (1.6461341s)
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (1.65s)

TestNetworkPlugins/group/kindnet/NetCatPod (29.06s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-255400 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-m7448" [fc1544f1-29f9-4e92-9fd1-8adc3e1b5310] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-m7448" [fc1544f1-29f9-4e92-9fd1-8adc3e1b5310] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 28.1753322s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (29.06s)

TestNetworkPlugins/group/custom-flannel/Start (156.58s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-flannel-255400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-flannel-255400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=docker: (2m36.5811499s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (156.58s)

TestNetworkPlugins/group/kindnet/DNS (0.38s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-255400 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.38s)

TestNetworkPlugins/group/kindnet/Localhost (0.37s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-255400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.37s)

TestNetworkPlugins/group/kindnet/HairPin (0.35s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-255400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.35s)

TestNetworkPlugins/group/false/Start (113.36s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p false-255400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p false-255400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker: (1m53.3619624s)
--- PASS: TestNetworkPlugins/group/false/Start (113.36s)

TestNetworkPlugins/group/calico/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-wn5mn" [efd07a78-b51e-48e9-98eb-ade79749c936] Running
E0719 05:05:42.872487   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-172900\client.crt: The system cannot find the path specified.
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.0250935s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.03s)

TestNetworkPlugins/group/calico/KubeletFlags (1.36s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p calico-255400 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p calico-255400 "pgrep -a kubelet": (1.3631442s)
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (1.36s)

TestNetworkPlugins/group/calico/NetCatPod (20.67s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-255400 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-7fxcf" [f962aa5e-1103-4e86-b7ad-e47fe45fafe6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-7fxcf" [f962aa5e-1103-4e86-b7ad-e47fe45fafe6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 20.020837s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (20.67s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (1.4s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p custom-flannel-255400 "pgrep -a kubelet"
E0719 05:06:00.321098   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-365100\client.crt: The system cannot find the path specified.
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p custom-flannel-255400 "pgrep -a kubelet": (1.3954572s)
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (1.40s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (20.86s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-255400 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-2sw6c" [895612d4-541d-42e8-8a6e-be2305509f7c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-2sw6c" [895612d4-541d-42e8-8a6e-be2305509f7c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 20.0785764s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (20.86s)

TestNetworkPlugins/group/calico/DNS (0.52s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-255400 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.52s)

TestNetworkPlugins/group/calico/Localhost (0.42s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-255400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.42s)

TestNetworkPlugins/group/calico/HairPin (0.39s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-255400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.39s)

TestNetworkPlugins/group/custom-flannel/DNS (0.58s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-255400 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.58s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.37s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-255400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.37s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.38s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-255400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.38s)

TestNetworkPlugins/group/false/KubeletFlags (1.48s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p false-255400 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p false-255400 "pgrep -a kubelet": (1.4810809s)
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (1.48s)

TestNetworkPlugins/group/false/NetCatPod (23.85s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-255400 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-dw5fz" [4426689a-2172-4027-8210-da188443d99d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0719 05:07:06.690439   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\auto-255400\client.crt: The system cannot find the path specified.
helpers_test.go:344: "netcat-6bc787d567-dw5fz" [4426689a-2172-4027-8210-da188443d99d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 23.0176139s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (23.85s)

TestNetworkPlugins/group/enable-default-cni/Start (181.68s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p enable-default-cni-255400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p enable-default-cni-255400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker: (3m1.6751446s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (181.68s)

TestNetworkPlugins/group/flannel/Start (161.6s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p flannel-255400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p flannel-255400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker: (2m41.5998989s)
--- PASS: TestNetworkPlugins/group/flannel/Start (161.60s)

TestNetworkPlugins/group/false/DNS (0.41s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-255400 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.41s)

TestNetworkPlugins/group/false/Localhost (0.42s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-255400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.42s)

TestNetworkPlugins/group/false/HairPin (0.41s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-255400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.41s)

TestNetworkPlugins/group/bridge/Start (138.4s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p bridge-255400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker
E0719 05:08:08.160649   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\auto-255400\client.crt: The system cannot find the path specified.
E0719 05:08:10.617935   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kindnet-255400\client.crt: The system cannot find the path specified.
E0719 05:08:10.634222   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kindnet-255400\client.crt: The system cannot find the path specified.
E0719 05:08:10.649180   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kindnet-255400\client.crt: The system cannot find the path specified.
E0719 05:08:10.679827   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kindnet-255400\client.crt: The system cannot find the path specified.
E0719 05:08:10.727044   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kindnet-255400\client.crt: The system cannot find the path specified.
E0719 05:08:10.820246   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kindnet-255400\client.crt: The system cannot find the path specified.
E0719 05:08:10.988221   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kindnet-255400\client.crt: The system cannot find the path specified.
E0719 05:08:11.315100   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kindnet-255400\client.crt: The system cannot find the path specified.
E0719 05:08:11.963361   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kindnet-255400\client.crt: The system cannot find the path specified.
E0719 05:08:13.243688   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kindnet-255400\client.crt: The system cannot find the path specified.
E0719 05:08:15.810551   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kindnet-255400\client.crt: The system cannot find the path specified.
E0719 05:08:20.945224   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kindnet-255400\client.crt: The system cannot find the path specified.
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p bridge-255400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker: (2m18.4002712s)
--- PASS: TestNetworkPlugins/group/bridge/Start (138.40s)

TestNetworkPlugins/group/kubenet/Start (135.83s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubenet-255400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker
E0719 05:08:51.686430   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kindnet-255400\client.crt: The system cannot find the path specified.
E0719 05:09:30.086209   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\auto-255400\client.crt: The system cannot find the path specified.
E0719 05:09:32.661467   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kindnet-255400\client.crt: The system cannot find the path specified.
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p kubenet-255400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker: (2m15.8328354s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (135.83s)

TestNetworkPlugins/group/flannel/ControllerPod (6.02s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-qr2vn" [0a72a678-3134-4767-9fdd-24f64ef481be] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.0098479s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.02s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (1.37s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p enable-default-cni-255400 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p enable-default-cni-255400 "pgrep -a kubelet": (1.3632424s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (1.37s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (22.7s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-255400 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-sv4mk" [08fb6599-e9e6-4872-b500-7bc0c8301b4a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-sv4mk" [08fb6599-e9e6-4872-b500-7bc0c8301b4a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 22.0242657s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (22.70s)

TestNetworkPlugins/group/flannel/KubeletFlags (2.1s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p flannel-255400 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p flannel-255400 "pgrep -a kubelet": (2.103217s)
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (2.10s)

TestNetworkPlugins/group/bridge/KubeletFlags (2.25s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p bridge-255400 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p bridge-255400 "pgrep -a kubelet": (2.2448066s)
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (2.25s)

TestNetworkPlugins/group/flannel/NetCatPod (24.14s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-255400 replace --force -f testdata\netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context flannel-255400 replace --force -f testdata\netcat-deployment.yaml: (1.0819016s)
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-lzpv6" [5bbc3c1d-2bbe-4f55-bd6c-69b32e7ef510] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-lzpv6" [5bbc3c1d-2bbe-4f55-bd6c-69b32e7ef510] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 23.0089281s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (24.14s)

TestNetworkPlugins/group/bridge/NetCatPod (23.17s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-255400 replace --force -f testdata\netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context bridge-255400 replace --force -f testdata\netcat-deployment.yaml: (1.0525552s)
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-strx2" [da197cf3-cda7-4ce3-b85a-1404edea7f9f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-strx2" [da197cf3-cda7-4ce3-b85a-1404edea7f9f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 22.0148539s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (23.17s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.44s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-255400 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.44s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.38s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-255400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.38s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-255400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.31s)

TestNetworkPlugins/group/flannel/DNS (0.42s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-255400 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.42s)

TestNetworkPlugins/group/flannel/Localhost (0.3s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-255400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.30s)

TestNetworkPlugins/group/flannel/HairPin (0.48s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-255400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.48s)

TestNetworkPlugins/group/bridge/DNS (0.43s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-255400 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.43s)

TestNetworkPlugins/group/bridge/Localhost (0.44s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-255400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.44s)

TestNetworkPlugins/group/bridge/HairPin (0.59s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
E0719 05:10:40.332457   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\calico-255400\client.crt: The system cannot find the path specified.
net_test.go:264: (dbg) Run:  kubectl --context bridge-255400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0719 05:10:40.415951   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\calico-255400\client.crt: The system cannot find the path specified.
E0719 05:10:40.583314   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\calico-255400\client.crt: The system cannot find the path specified.
E0719 05:10:40.912016   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\calico-255400\client.crt: The system cannot find the path specified.
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.59s)

TestNetworkPlugins/group/kubenet/KubeletFlags (1.99s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kubenet-255400 "pgrep -a kubelet"
E0719 05:11:00.329760   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-365100\client.crt: The system cannot find the path specified.
E0719 05:11:00.805805   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\calico-255400\client.crt: The system cannot find the path specified.
E0719 05:11:01.624977   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\custom-flannel-255400\client.crt: The system cannot find the path specified.
E0719 05:11:01.640178   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\custom-flannel-255400\client.crt: The system cannot find the path specified.
E0719 05:11:01.655588   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\custom-flannel-255400\client.crt: The system cannot find the path specified.
E0719 05:11:01.686365   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\custom-flannel-255400\client.crt: The system cannot find the path specified.
E0719 05:11:01.734291   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\custom-flannel-255400\client.crt: The system cannot find the path specified.
E0719 05:11:01.827030   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\custom-flannel-255400\client.crt: The system cannot find the path specified.
E0719 05:11:01.995183   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\custom-flannel-255400\client.crt: The system cannot find the path specified.
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p kubenet-255400 "pgrep -a kubelet": (1.992092s)
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (1.99s)

TestNetworkPlugins/group/kubenet/NetCatPod (27.06s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-255400 replace --force -f testdata\netcat-deployment.yaml
E0719 05:11:02.322208   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\custom-flannel-255400\client.crt: The system cannot find the path specified.
E0719 05:11:02.970706   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\custom-flannel-255400\client.crt: The system cannot find the path specified.
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-l6snc" [2fcb1348-f8fa-4779-99bb-72f4bb851a03] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0719 05:11:04.261843   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\custom-flannel-255400\client.crt: The system cannot find the path specified.
E0719 05:11:06.828464   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\custom-flannel-255400\client.crt: The system cannot find the path specified.
E0719 05:11:11.955436   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\custom-flannel-255400\client.crt: The system cannot find the path specified.
E0719 05:11:21.292281   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\calico-255400\client.crt: The system cannot find the path specified.
E0719 05:11:22.207885   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\custom-flannel-255400\client.crt: The system cannot find the path specified.
helpers_test.go:344: "netcat-6bc787d567-l6snc" [2fcb1348-f8fa-4779-99bb-72f4bb851a03] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 26.0189287s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (27.06s)

TestNetworkPlugins/group/kubenet/DNS (0.63s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-255400 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.63s)

TestNetworkPlugins/group/kubenet/Localhost (0.55s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-255400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.55s)

TestNetworkPlugins/group/kubenet/HairPin (0.56s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-255400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.56s)

TestStartStop/group/old-k8s-version/serial/FirstStart (293.96s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-546500 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.20.0
E0719 05:12:02.142523   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\false-255400\client.crt: The system cannot find the path specified.
E0719 05:12:02.155769   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\false-255400\client.crt: The system cannot find the path specified.
E0719 05:12:02.171544   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\false-255400\client.crt: The system cannot find the path specified.
E0719 05:12:02.202583   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\false-255400\client.crt: The system cannot find the path specified.
E0719 05:12:02.249540   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\false-255400\client.crt: The system cannot find the path specified.
E0719 05:12:02.265224   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\calico-255400\client.crt: The system cannot find the path specified.
E0719 05:12:02.341423   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\false-255400\client.crt: The system cannot find the path specified.
E0719 05:12:02.513487   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\false-255400\client.crt: The system cannot find the path specified.
E0719 05:12:02.834855   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\false-255400\client.crt: The system cannot find the path specified.
E0719 05:12:03.481920   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\false-255400\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-546500 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.20.0: (4m53.9599431s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (293.96s)

TestStartStop/group/no-preload/serial/FirstStart (216.34s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-857600 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.31.0-beta.0
E0719 05:12:13.564327   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\false-255400\client.crt: The system cannot find the path specified.
E0719 05:12:13.936069   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\auto-255400\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-857600 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.31.0-beta.0: (3m36.3384253s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (216.34s)

TestStartStop/group/embed-certs/serial/FirstStart (174.18s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-561200 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.30.3
E0719 05:12:23.666089   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\custom-flannel-255400\client.crt: The system cannot find the path specified.
E0719 05:12:23.805019   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\false-255400\client.crt: The system cannot find the path specified.
E0719 05:12:39.661923   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-172900\client.crt: The system cannot find the path specified.
E0719 05:12:44.293530   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\false-255400\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-561200 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.30.3: (2m54.1818993s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (174.18s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (133.8s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-diff-port-683400 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.30.3
E0719 05:13:24.190843   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\calico-255400\client.crt: The system cannot find the path specified.
E0719 05:13:25.268240   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\false-255400\client.crt: The system cannot find the path specified.
E0719 05:13:38.436590   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kindnet-255400\client.crt: The system cannot find the path specified.
E0719 05:13:45.598306   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\custom-flannel-255400\client.crt: The system cannot find the path specified.
E0719 05:14:47.196671   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\false-255400\client.crt: The system cannot find the path specified.
E0719 05:15:06.491302   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\flannel-255400\client.crt: The system cannot find the path specified.
E0719 05:15:06.507273   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\flannel-255400\client.crt: The system cannot find the path specified.
E0719 05:15:06.522293   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\flannel-255400\client.crt: The system cannot find the path specified.
E0719 05:15:06.554114   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\flannel-255400\client.crt: The system cannot find the path specified.
E0719 05:15:06.600715   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\flannel-255400\client.crt: The system cannot find the path specified.
E0719 05:15:06.695045   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\flannel-255400\client.crt: The system cannot find the path specified.
E0719 05:15:06.868323   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\flannel-255400\client.crt: The system cannot find the path specified.
E0719 05:15:07.196701   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\flannel-255400\client.crt: The system cannot find the path specified.
E0719 05:15:07.841229   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\flannel-255400\client.crt: The system cannot find the path specified.
E0719 05:15:09.128942   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\flannel-255400\client.crt: The system cannot find the path specified.
E0719 05:15:11.262762   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\enable-default-cni-255400\client.crt: The system cannot find the path specified.
E0719 05:15:11.278501   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\enable-default-cni-255400\client.crt: The system cannot find the path specified.
E0719 05:15:11.299342   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\enable-default-cni-255400\client.crt: The system cannot find the path specified.
E0719 05:15:11.327047   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\enable-default-cni-255400\client.crt: The system cannot find the path specified.
E0719 05:15:11.376332   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\enable-default-cni-255400\client.crt: The system cannot find the path specified.
E0719 05:15:11.467303   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\enable-default-cni-255400\client.crt: The system cannot find the path specified.
E0719 05:15:11.637264   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\enable-default-cni-255400\client.crt: The system cannot find the path specified.
E0719 05:15:11.700661   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\flannel-255400\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-diff-port-683400 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.30.3: (2m13.7998144s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (133.80s)

TestStartStop/group/embed-certs/serial/DeployApp (10.92s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-561200 create -f testdata\busybox.yaml
E0719 05:15:11.967940   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\enable-default-cni-255400\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f198cb4a-76b0-4ebc-86ff-7cbd3624d42d] Pending
E0719 05:15:12.609893   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\enable-default-cni-255400\client.crt: The system cannot find the path specified.
helpers_test.go:344: "busybox" [f198cb4a-76b0-4ebc-86ff-7cbd3624d42d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0719 05:15:13.893260   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\enable-default-cni-255400\client.crt: The system cannot find the path specified.
E0719 05:15:16.454356   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\enable-default-cni-255400\client.crt: The system cannot find the path specified.
E0719 05:15:16.828030   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\flannel-255400\client.crt: The system cannot find the path specified.
E0719 05:15:17.379268   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\bridge-255400\client.crt: The system cannot find the path specified.
E0719 05:15:17.390292   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\bridge-255400\client.crt: The system cannot find the path specified.
helpers_test.go:344: "busybox" [f198cb4a-76b0-4ebc-86ff-7cbd3624d42d] Running
E0719 05:15:17.409316   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\bridge-255400\client.crt: The system cannot find the path specified.
E0719 05:15:17.441398   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\bridge-255400\client.crt: The system cannot find the path specified.
E0719 05:15:17.488623   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\bridge-255400\client.crt: The system cannot find the path specified.
E0719 05:15:17.581991   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\bridge-255400\client.crt: The system cannot find the path specified.
E0719 05:15:17.751848   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\bridge-255400\client.crt: The system cannot find the path specified.
E0719 05:15:18.082830   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\bridge-255400\client.crt: The system cannot find the path specified.
E0719 05:15:18.724425   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\bridge-255400\client.crt: The system cannot find the path specified.
E0719 05:15:20.009951   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\bridge-255400\client.crt: The system cannot find the path specified.
E0719 05:15:21.579792   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\enable-default-cni-255400\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.0213719s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-561200 exec busybox -- /bin/sh -c "ulimit -n"
E0719 05:15:22.579117   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\bridge-255400\client.crt: The system cannot find the path specified.
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.92s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.34s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-561200 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-561200 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.9818397s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-561200 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.34s)

TestStartStop/group/embed-certs/serial/Stop (13.17s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p embed-certs-561200 --alsologtostderr -v=3
E0719 05:15:27.079268   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\flannel-255400\client.crt: The system cannot find the path specified.
E0719 05:15:27.703965   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\bridge-255400\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p embed-certs-561200 --alsologtostderr -v=3: (13.1699706s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.17s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.79s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-683400 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [08136f99-e04d-446f-9562-c5557a884e79] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0719 05:15:31.835222   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\enable-default-cni-255400\client.crt: The system cannot find the path specified.
helpers_test.go:344: "busybox" [08136f99-e04d-446f-9562-c5557a884e79] Running
E0719 05:15:37.945391   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\bridge-255400\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.014613s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-683400 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.79s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (1.33s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-561200 -n embed-certs-561200
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-561200 -n embed-certs-561200: exit status 7 (529.5158ms)

-- stdout --
	Stopped
-- /stdout --

** stderr ** 
	W0719 05:15:39.424360    5520 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-561200 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (1.33s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.97s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-diff-port-683400 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0719 05:15:40.227930   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\calico-255400\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-diff-port-683400 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.6537261s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-683400 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.97s)

TestStartStop/group/embed-certs/serial/SecondStart (294.72s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-561200 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-561200 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.30.3: (4m53.1627857s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-561200 -n embed-certs-561200
E0719 05:20:34.305309   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\flannel-255400\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-561200 -n embed-certs-561200: (1.5551885s)
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (294.72s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (13.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p default-k8s-diff-port-683400 --alsologtostderr -v=3
E0719 05:15:47.567841   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\flannel-255400\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p default-k8s-diff-port-683400 --alsologtostderr -v=3: (13.3641428s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.36s)

TestStartStop/group/no-preload/serial/DeployApp (12.84s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-857600 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a484bf58-dcac-43f9-81d6-9e9bbfcea8e5] Pending
helpers_test.go:344: "busybox" [a484bf58-dcac-43f9-81d6-9e9bbfcea8e5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0719 05:15:52.323703   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\enable-default-cni-255400\client.crt: The system cannot find the path specified.
helpers_test.go:344: "busybox" [a484bf58-dcac-43f9-81d6-9e9bbfcea8e5] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 12.0211565s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-857600 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (12.84s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (1.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-683400 -n default-k8s-diff-port-683400
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-683400 -n default-k8s-diff-port-683400: exit status 7 (512.8334ms)

-- stdout --
	Stopped
-- /stdout --

** stderr ** 
	W0719 05:15:56.373204    8380 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-diff-port-683400 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (1.29s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (297.88s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-diff-port-683400 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.30.3
E0719 05:15:58.430269   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\bridge-255400\client.crt: The system cannot find the path specified.
E0719 05:16:00.326739   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-365100\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-diff-port-683400 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.30.3: (4m56.2534138s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-683400 -n default-k8s-diff-port-683400
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-683400 -n default-k8s-diff-port-683400: (1.626657s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (297.88s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (5.49s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-857600 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0719 05:16:01.624413   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\custom-flannel-255400\client.crt: The system cannot find the path specified.
E0719 05:16:03.100310   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kubenet-255400\client.crt: The system cannot find the path specified.
E0719 05:16:03.116290   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kubenet-255400\client.crt: The system cannot find the path specified.
E0719 05:16:03.132307   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kubenet-255400\client.crt: The system cannot find the path specified.
E0719 05:16:03.163301   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kubenet-255400\client.crt: The system cannot find the path specified.
E0719 05:16:03.210301   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kubenet-255400\client.crt: The system cannot find the path specified.
E0719 05:16:03.304326   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kubenet-255400\client.crt: The system cannot find the path specified.
E0719 05:16:03.479246   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kubenet-255400\client.crt: The system cannot find the path specified.
E0719 05:16:03.807290   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kubenet-255400\client.crt: The system cannot find the path specified.
E0719 05:16:04.449056   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kubenet-255400\client.crt: The system cannot find the path specified.
E0719 05:16:05.740731   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kubenet-255400\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-857600 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (5.1179286s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-857600 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (5.49s)

TestStartStop/group/no-preload/serial/Stop (16.17s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p no-preload-857600 --alsologtostderr -v=3
E0719 05:16:08.034515   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\calico-255400\client.crt: The system cannot find the path specified.
E0719 05:16:08.301267   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kubenet-255400\client.crt: The system cannot find the path specified.
E0719 05:16:13.422189   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kubenet-255400\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p no-preload-857600 --alsologtostderr -v=3: (16.1743583s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (16.17s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (1.51s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-857600 -n no-preload-857600
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-857600 -n no-preload-857600: exit status 7 (558.7684ms)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0719 05:16:22.519112    7612 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-857600 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0719 05:16:23.675286   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kubenet-255400\client.crt: The system cannot find the path specified.
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (1.51s)

TestStartStop/group/no-preload/serial/SecondStart (298.83s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-857600 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.31.0-beta.0
E0719 05:16:28.531815   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\flannel-255400\client.crt: The system cannot find the path specified.
E0719 05:16:29.441811   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\custom-flannel-255400\client.crt: The system cannot find the path specified.
E0719 05:16:33.290487   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\enable-default-cni-255400\client.crt: The system cannot find the path specified.
E0719 05:16:39.403038   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\bridge-255400\client.crt: The system cannot find the path specified.
E0719 05:16:44.160874   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kubenet-255400\client.crt: The system cannot find the path specified.
E0719 05:16:46.121389   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\auto-255400\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-857600 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.31.0-beta.0: (4m57.1680448s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-857600 -n no-preload-857600
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-857600 -n no-preload-857600: (1.6623348s)
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (298.83s)

TestStartStop/group/old-k8s-version/serial/DeployApp (15.5s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-546500 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [88b026b1-f4db-4546-b8bf-e8139d58655a] Pending
helpers_test.go:344: "busybox" [88b026b1-f4db-4546-b8bf-e8139d58655a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0719 05:17:02.146037   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\false-255400\client.crt: The system cannot find the path specified.
helpers_test.go:344: "busybox" [88b026b1-f4db-4546-b8bf-e8139d58655a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 14.0091696s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-546500 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (15.50s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (4.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-546500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-546500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (3.7812899s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-546500 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (4.18s)

TestStartStop/group/old-k8s-version/serial/Stop (14.33s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p old-k8s-version-546500 --alsologtostderr -v=3
E0719 05:17:25.133626   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kubenet-255400\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p old-k8s-version-546500 --alsologtostderr -v=3: (14.326213s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (14.33s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (1.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-546500 -n old-k8s-version-546500
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-546500 -n old-k8s-version-546500: exit status 7 (478.7202ms)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0719 05:17:29.708975   13072 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-546500 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (1.27s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-28zsx" [1a796913-6c9e-42c6-a7d7-4a9d1cbcfad2] Running
E0719 05:20:39.075624   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\enable-default-cni-255400\client.crt: The system cannot find the path specified.
E0719 05:20:40.228722   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\calico-255400\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0204666s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.43s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-28zsx" [1a796913-6c9e-42c6-a7d7-4a9d1cbcfad2] Running
E0719 05:20:45.175554   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\bridge-255400\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0202557s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-561200 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.43s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.94s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p embed-certs-561200 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.94s)

TestStartStop/group/embed-certs/serial/Pause (10.17s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p embed-certs-561200 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p embed-certs-561200 --alsologtostderr -v=1: (1.9951834s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-561200 -n embed-certs-561200
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-561200 -n embed-certs-561200: exit status 2 (1.5225081s)

-- stdout --
	Paused

-- /stdout --
** stderr ** 
	W0719 05:20:48.854755   13300 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-561200 -n embed-certs-561200
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-561200 -n embed-certs-561200: exit status 2 (1.4795882s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0719 05:20:50.360685    4404 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p embed-certs-561200 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p embed-certs-561200 --alsologtostderr -v=1: (1.7989266s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-561200 -n embed-certs-561200
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-561200 -n embed-certs-561200: (1.8509002s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-561200 -n embed-certs-561200
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-561200 -n embed-certs-561200: (1.5231728s)
--- PASS: TestStartStop/group/embed-certs/serial/Pause (10.17s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-56wcz" [558bb983-8f8c-42b6-a3fc-d07c17f48499] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0111521s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.02s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.52s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-56wcz" [558bb983-8f8c-42b6-a3fc-d07c17f48499] Running
E0719 05:21:01.629212   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\custom-flannel-255400\client.crt: The system cannot find the path specified.
E0719 05:21:03.106727   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kubenet-255400\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0172388s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-683400 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.52s)

TestStartStop/group/newest-cni/serial/FirstStart (81.92s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-800400 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-800400 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.31.0-beta.0: (1m21.9223432s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (81.92s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (1.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p default-k8s-diff-port-683400 image list --format=json
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe -p default-k8s-diff-port-683400 image list --format=json: (1.0602322s)
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (1.06s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (11.73s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-683400 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-683400 --alsologtostderr -v=1: (2.4266413s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-683400 -n default-k8s-diff-port-683400
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-683400 -n default-k8s-diff-port-683400: exit status 2 (1.5985205s)

-- stdout --
	Paused

-- /stdout --
** stderr ** 
	W0719 05:21:10.588653   13716 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-683400 -n default-k8s-diff-port-683400
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-683400 -n default-k8s-diff-port-683400: exit status 2 (1.6199402s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0719 05:21:12.190977    5804 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p default-k8s-diff-port-683400 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p default-k8s-diff-port-683400 --alsologtostderr -v=1: (2.7480228s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-683400 -n default-k8s-diff-port-683400
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-683400 -n default-k8s-diff-port-683400: (1.7965001s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-683400 -n default-k8s-diff-port-683400
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-683400 -n default-k8s-diff-port-683400: (1.5442694s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (11.73s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5cc9f66cf4-bxrdv" [50bc5bb6-a60b-4837-ab3d-44778e1e6319] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0225343s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.49s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5cc9f66cf4-bxrdv" [50bc5bb6-a60b-4837-ab3d-44778e1e6319] Running
E0719 05:21:30.920870   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kubenet-255400\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0158407s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-857600 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.49s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (1.03s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-857600 image list --format=json
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe -p no-preload-857600 image list --format=json: (1.0326329s)
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (1.03s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (4.63s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-800400 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-800400 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (4.6300057s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (4.63s)

TestStartStop/group/newest-cni/serial/Stop (13.09s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p newest-cni-800400 --alsologtostderr -v=3
E0719 05:22:39.654795   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-172900\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p newest-cni-800400 --alsologtostderr -v=3: (13.0885521s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (13.09s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (1.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-800400 -n newest-cni-800400
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-800400 -n newest-cni-800400: exit status 7 (448.5688ms)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0719 05:22:45.088774   14500 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p newest-cni-800400 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (1.19s)

TestStartStop/group/newest-cni/serial/SecondStart (46.99s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-800400 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.31.0-beta.0
E0719 05:23:09.302390   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\auto-255400\client.crt: The system cannot find the path specified.
E0719 05:23:10.622659   10972 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kindnet-255400\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-800400 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.31.0-beta.0: (45.2076022s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-800400 -n newest-cni-800400
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-800400 -n newest-cni-800400: (1.7865071s)
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (46.99s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (1.12s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-800400 image list --format=json
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe -p newest-cni-800400 image list --format=json: (1.1234304s)
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (1.12s)

TestStartStop/group/newest-cni/serial/Pause (11.77s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p newest-cni-800400 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p newest-cni-800400 --alsologtostderr -v=1: (2.0385227s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-800400 -n newest-cni-800400
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-800400 -n newest-cni-800400: exit status 2 (1.7268486s)

-- stdout --
	Paused

-- /stdout --
** stderr ** 
	W0719 05:23:36.445257   10672 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-800400 -n newest-cni-800400
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-800400 -n newest-cni-800400: exit status 2 (1.7081446s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0719 05:23:38.214283    9316 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p newest-cni-800400 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p newest-cni-800400 --alsologtostderr -v=1: (2.159953s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-800400 -n newest-cni-800400
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-800400 -n newest-cni-800400: (2.1587925s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-800400 -n newest-cni-800400
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-800400 -n newest-cni-800400: (1.9719304s)
--- PASS: TestStartStop/group/newest-cni/serial/Pause (11.77s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-qbzb4" [81ecb5b4-7b43-4c17-a9c5-c154d2e26ca3] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0188667s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.02s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.55s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-qbzb4" [81ecb5b4-7b43-4c17-a9c5-c154d2e26ca3] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012775s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-546500 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.55s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.88s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p old-k8s-version-546500 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.88s)

TestStartStop/group/old-k8s-version/serial/Pause (10.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p old-k8s-version-546500 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p old-k8s-version-546500 --alsologtostderr -v=1: (1.8354112s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-546500 -n old-k8s-version-546500
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-546500 -n old-k8s-version-546500: exit status 2 (1.4129866s)

-- stdout --
	Paused

-- /stdout --
** stderr ** 
	W0719 05:24:52.758581    4060 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-546500 -n old-k8s-version-546500
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-546500 -n old-k8s-version-546500: exit status 2 (1.4182253s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0719 05:24:54.179222    8464 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p old-k8s-version-546500 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p old-k8s-version-546500 --alsologtostderr -v=1: (1.7850457s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-546500 -n old-k8s-version-546500
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-546500 -n old-k8s-version-546500: (2.1098053s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-546500 -n old-k8s-version-546500
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-546500 -n old-k8s-version-546500: (1.4699311s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (10.03s)

Test skip (27/348)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

TestDownloadOnly/v1.30.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

TestAddons/parallel/Registry (18.71s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 37.3589ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-656c9c8d9c-vwsgf" [258c6e2b-e69a-4ef0-8923-d9e8026d5cdc] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.0126398s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-bctnh" [3386906c-8bbe-4e3d-8aab-6131b15a05c7] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.0160522s
addons_test.go:342: (dbg) Run:  kubectl --context addons-172900 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-172900 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-172900 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (7.4003881s)
addons_test.go:357: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (18.71s)

TestAddons/parallel/Ingress (20.47s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-172900 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-172900 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-172900 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [6bf5ee3f-2ecc-4dbe-bcaa-d9a91e1c7dc5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [6bf5ee3f-2ecc-4dbe-bcaa-d9a91e1c7dc5] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 18.0292335s
addons_test.go:264: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-172900 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Done: out/minikube-windows-amd64.exe -p addons-172900 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (1.3435837s)
addons_test.go:271: debug: unexpected stderr for out/minikube-windows-amd64.exe -p addons-172900 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'":
W0719 03:38:15.760740    8160 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
addons_test.go:284: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (20.47s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DashboardCmd (300.02s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-365100 --alsologtostderr -v=1]
functional_test.go:912: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-365100 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 7308: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.02s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:64: skipping: mount broken on windows: https://github.com/kubernetes/minikube/issues/8303
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/ServiceCmdConnect (10.52s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-365100 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-365100 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-4t475" [8a9d758e-331d-404c-bf2e-77425f3af6e4] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-4t475" [8a9d758e-331d-404c-bf2e-77425f3af6e4] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.0118383s
functional_test.go:1642: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (10.52s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)
TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)
TestScheduledStopUnix (0s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)
TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)
TestNetworkPlugins/group/cilium (20.25s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-255400 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-255400

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-255400

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-255400

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-255400

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-255400

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-255400

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-255400

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-255400

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-255400

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-255400

>>> host: /etc/nsswitch.conf:
W0719 04:52:09.672602    7516 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-255400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255400"

>>> host: /etc/hosts:
W0719 04:52:09.974023    8104 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-255400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255400"

>>> host: /etc/resolv.conf:
W0719 04:52:10.267682   14748 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-255400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255400"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-255400

>>> host: crictl pods:
W0719 04:52:11.121720   10148 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-255400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255400"

>>> host: crictl containers:
W0719 04:52:11.478124    5248 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-255400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255400"

>>> k8s: describe netcat deployment:
error: context "cilium-255400" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-255400" does not exist

>>> k8s: netcat logs:
error: context "cilium-255400" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-255400" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-255400" does not exist

>>> k8s: coredns logs:
error: context "cilium-255400" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-255400" does not exist

>>> k8s: api server logs:
error: context "cilium-255400" does not exist

>>> host: /etc/cni:
W0719 04:52:13.184170    1572 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-255400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255400"

>>> host: ip a s:
W0719 04:52:13.572339   15212 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-255400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255400"

>>> host: ip r s:
W0719 04:52:13.924770    9328 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-255400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255400"

>>> host: iptables-save:
W0719 04:52:14.262768    6252 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-255400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255400"

>>> host: iptables table nat:
W0719 04:52:14.602094    9236 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-255400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255400"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-255400

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-255400

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-255400" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-255400" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-255400

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-255400

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-255400" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-255400" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-255400" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-255400" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-255400" does not exist

>>> host: kubelet daemon status:
W0719 04:52:17.483708   10916 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-255400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255400"

>>> host: kubelet daemon config:
W0719 04:52:17.864937    4604 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-255400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255400"

>>> k8s: kubelet logs:
W0719 04:52:18.235094   11428 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-255400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255400"

>>> host: /etc/kubernetes/kubelet.conf:
W0719 04:52:18.589709   14648 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-255400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255400"

>>> host: /var/lib/kubelet/config.yaml:
W0719 04:52:18.960990    4592 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-255400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255400"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-255400

>>> host: docker daemon status:
W0719 04:52:19.760753   14764 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-255400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255400"

>>> host: docker daemon config:
W0719 04:52:20.137954    4832 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-255400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255400"

>>> host: /etc/docker/daemon.json:
W0719 04:52:20.508597    4800 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-255400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255400"

>>> host: docker system info:
W0719 04:52:20.890442    6216 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-255400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255400"

>>> host: cri-docker daemon status:
W0719 04:52:21.242497   13984 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-255400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255400"

>>> host: cri-docker daemon config:
W0719 04:52:21.612458    7104 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-255400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255400"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
W0719 04:52:21.951459   13408 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-255400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255400"

>>> host: /usr/lib/systemd/system/cri-docker.service:
W0719 04:52:22.297491   10708 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-255400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255400"

>>> host: cri-dockerd version:
W0719 04:52:22.639761    7992 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-255400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255400"

>>> host: containerd daemon status:
W0719 04:52:22.980304    1588 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-255400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255400"

>>> host: containerd daemon config:
W0719 04:52:23.308312   14704 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-255400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255400"

>>> host: /lib/systemd/system/containerd.service:
W0719 04:52:23.639305    8748 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-255400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255400"

>>> host: /etc/containerd/config.toml:
W0719 04:52:23.983878    7368 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-255400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255400"

>>> host: containerd config dump:
W0719 04:52:24.326879    1548 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-255400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255400"

>>> host: crio daemon status:
W0719 04:52:24.717249    7292 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-255400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255400"

>>> host: crio daemon config:
W0719 04:52:25.160314    7068 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-255400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255400"

>>> host: /etc/crio:
W0719 04:52:25.571488    3280 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-255400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255400"

>>> host: crio config:
W0719 04:52:26.041494    2356 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-255400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255400"

----------------------- debugLogs end: cilium-255400 [took: 18.4752231s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-255400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cilium-255400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cilium-255400: (1.7756824s)
--- SKIP: TestNetworkPlugins/group/cilium (20.25s)

TestStartStop/group/disable-driver-mounts (2.29s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-545800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p disable-driver-mounts-545800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p disable-driver-mounts-545800: (2.2931199s)
--- SKIP: TestStartStop/group/disable-driver-mounts (2.29s)
