Test Report: Docker_Windows 18872

e5a45a5ea9a7bb508c00b9c70a33890e15fde7d2:2024-05-14:34460

Failed tests (5/339)

Order  Failed test  Duration (s)
55 TestErrorSpam/setup 65.16
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 6.36
86 TestFunctional/parallel/ConfigCmd 1.61
324 TestStartStop/group/old-k8s-version/serial/SecondStart 430.55
338 TestStartStop/group/default-k8s-diff-port/serial/Pause 44.28
TestErrorSpam/setup (65.16s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-065300 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-065300 --driver=docker
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-065300 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-065300 --driver=docker: (1m5.1561154s)
error_spam_test.go:96: unexpected stderr: "W0513 22:33:10.846502    9612 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube4\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."
error_spam_test.go:110: minikube stdout:
* [nospam-065300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4355 Build 19045.4355
- KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
- MINIKUBE_LOCATION=18872
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the docker driver based on user configuration
* Using Docker Desktop driver with root privileges
* Starting "nospam-065300" primary control-plane node in "nospam-065300" cluster
* Pulling base image v0.0.44 ...
* Creating docker container (CPUs=2, Memory=2250MB) ...
* Preparing Kubernetes v1.30.0 on Docker 26.1.1 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-065300" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
W0513 22:33:10.846502    9612 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
--- FAIL: TestErrorSpam/setup (65.16s)
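The test failed only because of the Docker Desktop warning on stderr, not because startup itself failed. One way such known-benign warnings are typically handled is an allow-list of stderr patterns; the sketch below is a hypothetical `isBenign` helper (not minikube's actual `error_spam_test.go` logic), using the exact warning text from this run:

```go
package main

import (
	"fmt"
	"regexp"
)

// allowedStderr is a hypothetical allow-list of benign stderr patterns.
// The pattern below matches the Docker Desktop warning seen in this run,
// which appears when ~/.docker/contexts has no "default" metadata yet.
var allowedStderr = []*regexp.Regexp{
	regexp.MustCompile(`Unable to resolve the current Docker CLI context "default"`),
}

// isBenign reports whether a stderr line matches a known-benign pattern.
func isBenign(line string) bool {
	for _, re := range allowedStderr {
		if re.MatchString(line) {
			return true
		}
	}
	return false
}

func main() {
	line := `W0513 22:33:10.846502    9612 main.go:291] Unable to resolve the current Docker CLI context "default": context not found`
	fmt.Println(isBenign(line)) // prints "true"
}
```

With a line like this on the allow-list, the run above would have passed, since its only stderr output was that context-resolution warning.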

TestFunctional/serial/MinikubeKubectlCmdDirectly (6.36s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:731: link out/minikube-windows-amd64.exe out\kubectl.exe: Cannot create a file when that file already exists.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-950600
helpers_test.go:235: (dbg) docker inspect functional-950600:

-- stdout --
	[
	    {
	        "Id": "e06804a4dc86fc078e3412a48c2045ae16a78ea0f1494f918caae3f0b4b602a0",
	        "Created": "2024-05-13T22:35:27.400068316Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 22436,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-05-13T22:35:28.088493852Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5a6e59a9bdc0d32876fd51e3702c6cb16f38b145ed5528e5f0bfb1de21e70803",
	        "ResolvConfPath": "/var/lib/docker/containers/e06804a4dc86fc078e3412a48c2045ae16a78ea0f1494f918caae3f0b4b602a0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e06804a4dc86fc078e3412a48c2045ae16a78ea0f1494f918caae3f0b4b602a0/hostname",
	        "HostsPath": "/var/lib/docker/containers/e06804a4dc86fc078e3412a48c2045ae16a78ea0f1494f918caae3f0b4b602a0/hosts",
	        "LogPath": "/var/lib/docker/containers/e06804a4dc86fc078e3412a48c2045ae16a78ea0f1494f918caae3f0b4b602a0/e06804a4dc86fc078e3412a48c2045ae16a78ea0f1494f918caae3f0b4b602a0-json.log",
	        "Name": "/functional-950600",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-950600:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-950600",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4194304000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/5f3e13c8030b29172f842c906492c3562e9ffff22596a974c9b8e1c16b8e4ddb-init/diff:/var/lib/docker/overlay2/e3065cc89db7a8fd6915450a1724667534193c4a9eb8348f67381d1430bd11e4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5f3e13c8030b29172f842c906492c3562e9ffff22596a974c9b8e1c16b8e4ddb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5f3e13c8030b29172f842c906492c3562e9ffff22596a974c9b8e1c16b8e4ddb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5f3e13c8030b29172f842c906492c3562e9ffff22596a974c9b8e1c16b8e4ddb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-950600",
	                "Source": "/var/lib/docker/volumes/functional-950600/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-950600",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-950600",
	                "name.minikube.sigs.k8s.io": "functional-950600",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "af50ec66c285accb667eafb9b2461d335d264c854ba7b9bae350e837aceaeecd",
	            "SandboxKey": "/var/run/docker/netns/af50ec66c285",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52764"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52765"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52766"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52767"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52768"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-950600": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "cdbf88f69f727fb840b0b0345e1711d0c7e7e6cc97f4595ee07ffee359d819c6",
	                    "EndpointID": "8079a074cd082ae5ab144aac1a2f43ba5be04ea1d00f9ee9920786ec1d8ef338",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "functional-950600",
	                        "e06804a4dc86"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-950600 -n functional-950600
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-950600 -n functional-950600: (1.288442s)
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-950600 logs -n 25: (2.6758325s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                            Args                             |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| pause   | nospam-065300 --log_dir                                     | nospam-065300     | minikube4\jenkins | v1.33.1 | 13 May 24 22:34 UTC | 13 May 24 22:34 UTC |
	|         | C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-065300 |                   |                   |         |                     |                     |
	|         | pause                                                       |                   |                   |         |                     |                     |
	| unpause | nospam-065300 --log_dir                                     | nospam-065300     | minikube4\jenkins | v1.33.1 | 13 May 24 22:34 UTC | 13 May 24 22:34 UTC |
	|         | C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-065300 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-065300 --log_dir                                     | nospam-065300     | minikube4\jenkins | v1.33.1 | 13 May 24 22:34 UTC | 13 May 24 22:34 UTC |
	|         | C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-065300 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-065300 --log_dir                                     | nospam-065300     | minikube4\jenkins | v1.33.1 | 13 May 24 22:34 UTC | 13 May 24 22:34 UTC |
	|         | C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-065300 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-065300 --log_dir                                     | nospam-065300     | minikube4\jenkins | v1.33.1 | 13 May 24 22:34 UTC | 13 May 24 22:34 UTC |
	|         | C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-065300 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-065300 --log_dir                                     | nospam-065300     | minikube4\jenkins | v1.33.1 | 13 May 24 22:34 UTC | 13 May 24 22:34 UTC |
	|         | C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-065300 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-065300 --log_dir                                     | nospam-065300     | minikube4\jenkins | v1.33.1 | 13 May 24 22:34 UTC | 13 May 24 22:34 UTC |
	|         | C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-065300 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| delete  | -p nospam-065300                                            | nospam-065300     | minikube4\jenkins | v1.33.1 | 13 May 24 22:34 UTC | 13 May 24 22:34 UTC |
	| start   | -p functional-950600                                        | functional-950600 | minikube4\jenkins | v1.33.1 | 13 May 24 22:34 UTC | 13 May 24 22:36 UTC |
	|         | --memory=4000                                               |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                       |                   |                   |         |                     |                     |
	|         | --wait=all --driver=docker                                  |                   |                   |         |                     |                     |
	| start   | -p functional-950600                                        | functional-950600 | minikube4\jenkins | v1.33.1 | 13 May 24 22:36 UTC | 13 May 24 22:37 UTC |
	|         | --alsologtostderr -v=8                                      |                   |                   |         |                     |                     |
	| cache   | functional-950600 cache add                                 | functional-950600 | minikube4\jenkins | v1.33.1 | 13 May 24 22:37 UTC | 13 May 24 22:37 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | functional-950600 cache add                                 | functional-950600 | minikube4\jenkins | v1.33.1 | 13 May 24 22:37 UTC | 13 May 24 22:37 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | functional-950600 cache add                                 | functional-950600 | minikube4\jenkins | v1.33.1 | 13 May 24 22:37 UTC | 13 May 24 22:37 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-950600 cache add                                 | functional-950600 | minikube4\jenkins | v1.33.1 | 13 May 24 22:37 UTC | 13 May 24 22:37 UTC |
	|         | minikube-local-cache-test:functional-950600                 |                   |                   |         |                     |                     |
	| cache   | functional-950600 cache delete                              | functional-950600 | minikube4\jenkins | v1.33.1 | 13 May 24 22:37 UTC | 13 May 24 22:37 UTC |
	|         | minikube-local-cache-test:functional-950600                 |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube4\jenkins | v1.33.1 | 13 May 24 22:37 UTC | 13 May 24 22:37 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | list                                                        | minikube          | minikube4\jenkins | v1.33.1 | 13 May 24 22:37 UTC | 13 May 24 22:37 UTC |
	| ssh     | functional-950600 ssh sudo                                  | functional-950600 | minikube4\jenkins | v1.33.1 | 13 May 24 22:37 UTC | 13 May 24 22:37 UTC |
	|         | crictl images                                               |                   |                   |         |                     |                     |
	| ssh     | functional-950600                                           | functional-950600 | minikube4\jenkins | v1.33.1 | 13 May 24 22:37 UTC | 13 May 24 22:37 UTC |
	|         | ssh sudo docker rmi                                         |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| ssh     | functional-950600 ssh                                       | functional-950600 | minikube4\jenkins | v1.33.1 | 13 May 24 22:37 UTC |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-950600 cache reload                              | functional-950600 | minikube4\jenkins | v1.33.1 | 13 May 24 22:37 UTC | 13 May 24 22:37 UTC |
	| ssh     | functional-950600 ssh                                       | functional-950600 | minikube4\jenkins | v1.33.1 | 13 May 24 22:37 UTC | 13 May 24 22:37 UTC |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube4\jenkins | v1.33.1 | 13 May 24 22:37 UTC | 13 May 24 22:37 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube4\jenkins | v1.33.1 | 13 May 24 22:37 UTC | 13 May 24 22:37 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| kubectl | functional-950600 kubectl --                                | functional-950600 | minikube4\jenkins | v1.33.1 | 13 May 24 22:37 UTC | 13 May 24 22:37 UTC |
	|         | --context functional-950600                                 |                   |                   |         |                     |                     |
	|         | get pods                                                    |                   |                   |         |                     |                     |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/13 22:36:21
	Running on machine: minikube4
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0513 22:36:21.788946    6152 out.go:291] Setting OutFile to fd 620 ...
	I0513 22:36:21.789472    6152 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 22:36:21.789472    6152 out.go:304] Setting ErrFile to fd 972...
	I0513 22:36:21.789472    6152 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 22:36:21.811697    6152 out.go:298] Setting JSON to false
	I0513 22:36:21.814061    6152 start.go:129] hostinfo: {"hostname":"minikube4","uptime":6020,"bootTime":1715633761,"procs":204,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4355 Build 19045.4355","kernelVersion":"10.0.19045.4355 Build 19045.4355","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W0513 22:36:21.814061    6152 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 22:36:21.818185    6152 out.go:177] * [functional-950600] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4355 Build 19045.4355
	I0513 22:36:21.820637    6152 notify.go:220] Checking for updates...
	I0513 22:36:21.823584    6152 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0513 22:36:21.826371    6152 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0513 22:36:21.829790    6152 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I0513 22:36:21.832759    6152 out.go:177]   - MINIKUBE_LOCATION=18872
	I0513 22:36:21.835150    6152 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0513 22:36:21.838241    6152 config.go:182] Loaded profile config "functional-950600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 22:36:21.838436    6152 driver.go:392] Setting default libvirt URI to qemu:///system
	I0513 22:36:22.133374    6152 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0513 22:36:22.148021    6152 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0513 22:36:22.489042    6152 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:72 OomKillDisable:true NGoroutines:86 SystemTime:2024-05-13 22:36:22.450939412 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0513 22:36:22.492761    6152 out.go:177] * Using the docker driver based on existing profile
	I0513 22:36:22.495853    6152 start.go:297] selected driver: docker
	I0513 22:36:22.495909    6152 start.go:901] validating driver "docker" against &{Name:functional-950600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-950600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 22:36:22.496077    6152 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0513 22:36:22.519931    6152 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0513 22:36:22.830182    6152 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:72 OomKillDisable:true NGoroutines:86 SystemTime:2024-05-13 22:36:22.789597587 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0513 22:36:22.936655    6152 cni.go:84] Creating CNI manager for ""
	I0513 22:36:22.936655    6152 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0513 22:36:22.936804    6152 start.go:340] cluster config:
	{Name:functional-950600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-950600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 22:36:22.942805    6152 out.go:177] * Starting "functional-950600" primary control-plane node in "functional-950600" cluster
	I0513 22:36:22.945490    6152 cache.go:121] Beginning downloading kic base image for docker with docker
	I0513 22:36:22.947504    6152 out.go:177] * Pulling base image v0.0.44 ...
	I0513 22:36:22.950538    6152 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 22:36:22.950538    6152 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e in local docker daemon
	I0513 22:36:22.950538    6152 preload.go:147] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0513 22:36:22.950538    6152 cache.go:56] Caching tarball of preloaded images
	I0513 22:36:22.950538    6152 preload.go:173] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0513 22:36:22.950538    6152 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0513 22:36:22.951691    6152 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-950600\config.json ...
	I0513 22:36:23.126517    6152 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e in local docker daemon, skipping pull
	I0513 22:36:23.126517    6152 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e exists in daemon, skipping load
	I0513 22:36:23.126517    6152 cache.go:194] Successfully downloaded all kic artifacts
	I0513 22:36:23.127488    6152 start.go:360] acquireMachinesLock for functional-950600: {Name:mk9349f269fbd1c80c252542b3e65a94f0e6a292 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 22:36:23.127488    6152 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-950600"
	I0513 22:36:23.127488    6152 start.go:96] Skipping create...Using existing machine configuration
	I0513 22:36:23.127488    6152 fix.go:54] fixHost starting: 
	I0513 22:36:23.146627    6152 cli_runner.go:164] Run: docker container inspect functional-950600 --format={{.State.Status}}
	I0513 22:36:23.326401    6152 fix.go:112] recreateIfNeeded on functional-950600: state=Running err=<nil>
	W0513 22:36:23.326401    6152 fix.go:138] unexpected machine state, will restart: <nil>
	I0513 22:36:23.334074    6152 out.go:177] * Updating the running docker "functional-950600" container ...
	I0513 22:36:23.336541    6152 machine.go:94] provisionDockerMachine start ...
	I0513 22:36:23.344054    6152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-950600
	I0513 22:36:23.521165    6152 main.go:141] libmachine: Using SSH client type: native
	I0513 22:36:23.522060    6152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x55a3c0] 0x55cfa0 <nil>  [] 0s} 127.0.0.1 52764 <nil> <nil>}
	I0513 22:36:23.522124    6152 main.go:141] libmachine: About to run SSH command:
	hostname
	I0513 22:36:23.710835    6152 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-950600
	
	I0513 22:36:23.710835    6152 ubuntu.go:169] provisioning hostname "functional-950600"
	I0513 22:36:23.725677    6152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-950600
	I0513 22:36:23.894324    6152 main.go:141] libmachine: Using SSH client type: native
	I0513 22:36:23.894324    6152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x55a3c0] 0x55cfa0 <nil>  [] 0s} 127.0.0.1 52764 <nil> <nil>}
	I0513 22:36:23.894324    6152 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-950600 && echo "functional-950600" | sudo tee /etc/hostname
	I0513 22:36:24.196011    6152 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-950600
	
	I0513 22:36:24.207047    6152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-950600
	I0513 22:36:24.384815    6152 main.go:141] libmachine: Using SSH client type: native
	I0513 22:36:24.384815    6152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x55a3c0] 0x55cfa0 <nil>  [] 0s} 127.0.0.1 52764 <nil> <nil>}
	I0513 22:36:24.385693    6152 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-950600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-950600/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-950600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0513 22:36:24.574686    6152 main.go:141] libmachine: SSH cmd err, output: <nil>: 
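The `/etc/hosts` script minikube just ran over SSH can be exercised on its own. This is a minimal sketch replaying the same guarded grep/sed logic against a scratch file: `HOSTS` and its seed entries are made up for illustration, the real command targets `/etc/hosts` via sudo, and GNU grep/sed (`\s`, `sed -i`) are assumed.

```shell
# Scratch stand-in for /etc/hosts, seeded with a stale 127.0.1.1 entry.
HOSTS=$(mktemp)
NAME=functional-950600
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"

# Same shape as the provisioner's script: only touch the file if the
# hostname is not already mapped.
if ! grep -q "\s$NAME" "$HOSTS"; then
    if grep -q '^127\.0\.1\.1\s' "$HOSTS"; then
        # An existing 127.0.1.1 entry gets rewritten in place.
        sed -i "s/^127\.0\.1\.1\s.*/127.0.1.1 $NAME/" "$HOSTS"
    else
        # Otherwise a fresh entry is appended.
        echo "127.0.1.1 $NAME" >> "$HOSTS"
    fi
fi
grep '^127.0.1.1' "$HOSTS"   # prints: 127.0.1.1 functional-950600
```

The outer guard is what makes the edit idempotent: once the hostname entry exists, rerunning the provisioner leaves the file untouched.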
	I0513 22:36:24.574762    6152 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I0513 22:36:24.574889    6152 ubuntu.go:177] setting up certificates
	I0513 22:36:24.574889    6152 provision.go:84] configureAuth start
	I0513 22:36:24.591668    6152 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-950600
	I0513 22:36:24.762091    6152 provision.go:143] copyHostCerts
	I0513 22:36:24.762248    6152 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem
	I0513 22:36:24.762584    6152 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I0513 22:36:24.762584    6152 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I0513 22:36:24.762584    6152 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0513 22:36:24.763827    6152 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem
	I0513 22:36:24.764055    6152 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I0513 22:36:24.764170    6152 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I0513 22:36:24.764354    6152 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0513 22:36:24.765330    6152 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem
	I0513 22:36:24.765615    6152 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I0513 22:36:24.765615    6152 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I0513 22:36:24.765924    6152 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I0513 22:36:24.766913    6152 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-950600 san=[127.0.0.1 192.168.49.2 functional-950600 localhost minikube]
	I0513 22:36:25.013782    6152 provision.go:177] copyRemoteCerts
	I0513 22:36:25.024293    6152 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0513 22:36:25.034002    6152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-950600
	I0513 22:36:25.231438    6152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52764 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-950600\id_rsa Username:docker}
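Each `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'"` call above resolves the host port that Docker Desktop bound to the container's 22/tcp (here 52764, fed into the SSH client). The same lookup can be sketched without a daemon; the JSON below is a trimmed, hand-written stand-in for the `Ports` map in real inspect output, and sed is used in place of the Go template:

```shell
# Trimmed stand-in for NetworkSettings.Ports from `docker container inspect`.
inspect='{"22/tcp":[{"HostIp":"0.0.0.0","HostPort":"52764"}]}'

# Extract the first HostPort, mirroring
# {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}.
port=$(printf '%s' "$inspect" | sed -n 's/.*"HostPort":"\([0-9]*\)".*/\1/p')
echo "$port"   # prints: 52764
```

Against a live daemon the Go template form is preferable, since the inspect JSON is nested and not safely parsed with sed.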
	I0513 22:36:25.358603    6152 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0513 22:36:25.359280    6152 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0513 22:36:25.397899    6152 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0513 22:36:25.398341    6152 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0513 22:36:25.440150    6152 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0513 22:36:25.440150    6152 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0513 22:36:25.479965    6152 provision.go:87] duration metric: took 905.0376ms to configureAuth
	I0513 22:36:25.480064    6152 ubuntu.go:193] setting minikube options for container-runtime
	I0513 22:36:25.480675    6152 config.go:182] Loaded profile config "functional-950600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 22:36:25.492185    6152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-950600
	I0513 22:36:25.657812    6152 main.go:141] libmachine: Using SSH client type: native
	I0513 22:36:25.658055    6152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x55a3c0] 0x55cfa0 <nil>  [] 0s} 127.0.0.1 52764 <nil> <nil>}
	I0513 22:36:25.658055    6152 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0513 22:36:25.841475    6152 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0513 22:36:25.841475    6152 ubuntu.go:71] root file system type: overlay
	I0513 22:36:25.842142    6152 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0513 22:36:25.851396    6152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-950600
	I0513 22:36:26.027718    6152 main.go:141] libmachine: Using SSH client type: native
	I0513 22:36:26.028269    6152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x55a3c0] 0x55cfa0 <nil>  [] 0s} 127.0.0.1 52764 <nil> <nil>}
	I0513 22:36:26.028497    6152 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0513 22:36:26.239548    6152 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0513 22:36:26.251177    6152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-950600
	I0513 22:36:26.415507    6152 main.go:141] libmachine: Using SSH client type: native
	I0513 22:36:26.416346    6152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x55a3c0] 0x55cfa0 <nil>  [] 0s} 127.0.0.1 52764 <nil> <nil>}
	I0513 22:36:26.416346    6152 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0513 22:36:26.620657    6152 main.go:141] libmachine: SSH cmd err, output: <nil>: 
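The `diff ... || { mv ...; systemctl restart docker; }` command above installs the regenerated unit file only when it differs from what is on disk, keying off diff's non-zero exit status so an unchanged config never restarts Docker. A standalone sketch of that compare-then-swap idiom, using scratch files and skipping the systemctl steps:

```shell
# Stand-ins for the installed unit and the freshly generated one.
current=$(mktemp); new=$(mktemp)
printf 'ExecStart=/usr/bin/dockerd\n' > "$current"
printf 'ExecStart=/usr/bin/dockerd --tlsverify\n' > "$new"

# diff exits non-zero when the files differ, so the || branch runs the swap
# (in the real command: mv into /lib/systemd/system, daemon-reload, restart).
diff -u "$current" "$new" >/dev/null || mv "$new" "$current"
grep -- '--tlsverify' "$current"   # prints the updated ExecStart line
```

When the files already match, diff exits 0 and the `||` branch is skipped, which is why repeated `minikube start` runs against an existing machine leave the Docker service alone.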
	I0513 22:36:26.620657    6152 machine.go:97] duration metric: took 3.283978s to provisionDockerMachine
	I0513 22:36:26.620657    6152 start.go:293] postStartSetup for "functional-950600" (driver="docker")
	I0513 22:36:26.620657    6152 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0513 22:36:26.635650    6152 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0513 22:36:26.643761    6152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-950600
	I0513 22:36:26.811965    6152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52764 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-950600\id_rsa Username:docker}
	I0513 22:36:26.961466    6152 ssh_runner.go:195] Run: cat /etc/os-release
	I0513 22:36:26.972070    6152 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.4 LTS"
	I0513 22:36:26.972070    6152 command_runner.go:130] > NAME="Ubuntu"
	I0513 22:36:26.972070    6152 command_runner.go:130] > VERSION_ID="22.04"
	I0513 22:36:26.972070    6152 command_runner.go:130] > VERSION="22.04.4 LTS (Jammy Jellyfish)"
	I0513 22:36:26.972070    6152 command_runner.go:130] > VERSION_CODENAME=jammy
	I0513 22:36:26.972070    6152 command_runner.go:130] > ID=ubuntu
	I0513 22:36:26.972070    6152 command_runner.go:130] > ID_LIKE=debian
	I0513 22:36:26.972070    6152 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0513 22:36:26.972070    6152 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0513 22:36:26.972070    6152 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0513 22:36:26.972070    6152 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0513 22:36:26.972070    6152 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0513 22:36:26.972070    6152 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0513 22:36:26.972070    6152 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0513 22:36:26.972070    6152 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0513 22:36:26.972070    6152 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0513 22:36:26.972070    6152 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I0513 22:36:26.972755    6152 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I0513 22:36:26.972795    6152 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\158682.pem -> 158682.pem in /etc/ssl/certs
	I0513 22:36:26.972795    6152 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\158682.pem -> /etc/ssl/certs/158682.pem
	I0513 22:36:26.973964    6152 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\15868\hosts -> hosts in /etc/test/nested/copy/15868
	I0513 22:36:26.973964    6152 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\15868\hosts -> /etc/test/nested/copy/15868/hosts
	I0513 22:36:26.988718    6152 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/15868
	I0513 22:36:27.010368    6152 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\158682.pem --> /etc/ssl/certs/158682.pem (1708 bytes)
	I0513 22:36:27.048929    6152 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\15868\hosts --> /etc/test/nested/copy/15868/hosts (40 bytes)
	I0513 22:36:27.097956    6152 start.go:296] duration metric: took 477.2791ms for postStartSetup
	I0513 22:36:27.111429    6152 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0513 22:36:27.119209    6152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-950600
	I0513 22:36:27.292814    6152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52764 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-950600\id_rsa Username:docker}
	I0513 22:36:27.413533    6152 command_runner.go:130] > 1%!
	(MISSING)I0513 22:36:27.426717    6152 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0513 22:36:27.441977    6152 command_runner.go:130] > 952G
	I0513 22:36:27.441977    6152 fix.go:56] duration metric: took 4.3143072s for fixHost
	I0513 22:36:27.441977    6152 start.go:83] releasing machines lock for "functional-950600", held for 4.3143072s
	I0513 22:36:27.452289    6152 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-950600
	I0513 22:36:27.622697    6152 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0513 22:36:27.634102    6152 ssh_runner.go:195] Run: cat /version.json
	I0513 22:36:27.634865    6152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-950600
	I0513 22:36:27.646644    6152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-950600
	I0513 22:36:27.802856    6152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52764 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-950600\id_rsa Username:docker}
	I0513 22:36:27.819557    6152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52764 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-950600\id_rsa Username:docker}
	I0513 22:36:27.929609    6152 command_runner.go:130] > {"iso_version": "v1.33.0-1715127532-18832", "kicbase_version": "v0.0.44", "minikube_version": "v1.33.1", "commit": "73afd464fc1f7f4bc77840abab5335e05911ae3f"}
	I0513 22:36:27.947098    6152 ssh_runner.go:195] Run: systemctl --version
	I0513 22:36:28.078577    6152 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0513 22:36:28.082730    6152 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.12)
	I0513 22:36:28.082832    6152 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0513 22:36:28.095542    6152 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0513 22:36:28.108040    6152 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0513 22:36:28.108040    6152 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0513 22:36:28.108040    6152 command_runner.go:130] > Device: 91h/145d	Inode: 234         Links: 1
	I0513 22:36:28.108040    6152 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0513 22:36:28.108040    6152 command_runner.go:130] > Access: 2024-05-13 22:23:35.104373879 +0000
	I0513 22:36:28.108040    6152 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0513 22:36:28.108040    6152 command_runner.go:130] > Change: 2024-05-13 22:23:02.676410681 +0000
	I0513 22:36:28.108040    6152 command_runner.go:130] >  Birth: 2024-05-13 22:23:02.676410681 +0000
	I0513 22:36:28.121807    6152 ssh_runner.go:195] Run: sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0513 22:36:28.137552    6152 command_runner.go:130] ! find: '\\etc\\cni\\net.d': No such file or directory
	W0513 22:36:28.140401    6152 start.go:438] unable to name loopback interface in configureRuntimes: unable to patch loopback cni config "/etc/cni/net.d/*loopback.conf*": sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;: Process exited with status 1
	stdout:
	
	stderr:
	find: '\\etc\\cni\\net.d': No such file or directory
	I0513 22:36:28.152331    6152 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0513 22:36:28.172460    6152 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0513 22:36:28.172501    6152 start.go:494] detecting cgroup driver to use...
	I0513 22:36:28.172501    6152 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0513 22:36:28.172501    6152 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0513 22:36:28.203002    6152 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0513 22:36:28.215879    6152 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0513 22:36:28.250954    6152 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0513 22:36:28.272939    6152 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0513 22:36:28.285586    6152 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0513 22:36:28.320267    6152 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0513 22:36:28.357110    6152 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0513 22:36:28.388219    6152 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0513 22:36:28.423351    6152 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0513 22:36:28.456880    6152 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0513 22:36:28.488858    6152 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0513 22:36:28.521295    6152 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0513 22:36:28.559139    6152 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0513 22:36:28.580437    6152 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0513 22:36:28.593561    6152 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0513 22:36:28.624839    6152 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 22:36:28.798090    6152 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0513 22:36:39.136397    6152 ssh_runner.go:235] Completed: sudo systemctl restart containerd: (10.3378722s)
	I0513 22:36:39.136498    6152 start.go:494] detecting cgroup driver to use...
	I0513 22:36:39.136578    6152 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0513 22:36:39.148658    6152 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0513 22:36:39.175363    6152 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0513 22:36:39.175363    6152 command_runner.go:130] > [Unit]
	I0513 22:36:39.175363    6152 command_runner.go:130] > Description=Docker Application Container Engine
	I0513 22:36:39.175363    6152 command_runner.go:130] > Documentation=https://docs.docker.com
	I0513 22:36:39.175363    6152 command_runner.go:130] > BindsTo=containerd.service
	I0513 22:36:39.175363    6152 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0513 22:36:39.175363    6152 command_runner.go:130] > Wants=network-online.target
	I0513 22:36:39.175363    6152 command_runner.go:130] > Requires=docker.socket
	I0513 22:36:39.175363    6152 command_runner.go:130] > StartLimitBurst=3
	I0513 22:36:39.175363    6152 command_runner.go:130] > StartLimitIntervalSec=60
	I0513 22:36:39.175363    6152 command_runner.go:130] > [Service]
	I0513 22:36:39.175363    6152 command_runner.go:130] > Type=notify
	I0513 22:36:39.175363    6152 command_runner.go:130] > Restart=on-failure
	I0513 22:36:39.175363    6152 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0513 22:36:39.175363    6152 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0513 22:36:39.175363    6152 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0513 22:36:39.175363    6152 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0513 22:36:39.175363    6152 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0513 22:36:39.175363    6152 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0513 22:36:39.175363    6152 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0513 22:36:39.175363    6152 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0513 22:36:39.175363    6152 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0513 22:36:39.175363    6152 command_runner.go:130] > ExecStart=
	I0513 22:36:39.175363    6152 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0513 22:36:39.175934    6152 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0513 22:36:39.175934    6152 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0513 22:36:39.175934    6152 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0513 22:36:39.175934    6152 command_runner.go:130] > LimitNOFILE=infinity
	I0513 22:36:39.175934    6152 command_runner.go:130] > LimitNPROC=infinity
	I0513 22:36:39.175934    6152 command_runner.go:130] > LimitCORE=infinity
	I0513 22:36:39.175934    6152 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0513 22:36:39.175934    6152 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0513 22:36:39.176035    6152 command_runner.go:130] > TasksMax=infinity
	I0513 22:36:39.176057    6152 command_runner.go:130] > TimeoutStartSec=0
	I0513 22:36:39.176057    6152 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0513 22:36:39.176057    6152 command_runner.go:130] > Delegate=yes
	I0513 22:36:39.176057    6152 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0513 22:36:39.176128    6152 command_runner.go:130] > KillMode=process
	I0513 22:36:39.176191    6152 command_runner.go:130] > [Install]
	I0513 22:36:39.176242    6152 command_runner.go:130] > WantedBy=multi-user.target
	I0513 22:36:39.176303    6152 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0513 22:36:39.188582    6152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0513 22:36:39.213757    6152 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0513 22:36:39.242664    6152 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0513 22:36:39.255572    6152 ssh_runner.go:195] Run: which cri-dockerd
	I0513 22:36:39.264871    6152 command_runner.go:130] > /usr/bin/cri-dockerd
	I0513 22:36:39.275899    6152 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0513 22:36:39.295536    6152 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0513 22:36:39.340715    6152 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0513 22:36:39.550604    6152 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0513 22:36:39.710343    6152 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0513 22:36:39.710343    6152 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0513 22:36:39.752214    6152 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 22:36:39.922618    6152 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0513 22:36:40.908055    6152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0513 22:36:40.953967    6152 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0513 22:36:41.001537    6152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0513 22:36:41.034836    6152 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0513 22:36:41.152243    6152 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0513 22:36:41.311967    6152 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 22:36:41.468123    6152 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0513 22:36:41.509960    6152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0513 22:36:41.545609    6152 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 22:36:41.742568    6152 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0513 22:36:41.891265    6152 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0513 22:36:41.904014    6152 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0513 22:36:41.915289    6152 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0513 22:36:41.915807    6152 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0513 22:36:41.915807    6152 command_runner.go:130] > Device: 9ah/154d	Inode: 722         Links: 1
	I0513 22:36:41.915807    6152 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0513 22:36:41.915896    6152 command_runner.go:130] > Access: 2024-05-13 22:36:41.748239347 +0000
	I0513 22:36:41.915896    6152 command_runner.go:130] > Modify: 2024-05-13 22:36:41.748239347 +0000
	I0513 22:36:41.915896    6152 command_runner.go:130] > Change: 2024-05-13 22:36:41.758240814 +0000
	I0513 22:36:41.915931    6152 command_runner.go:130] >  Birth: -
	I0513 22:36:41.915981    6152 start.go:562] Will wait 60s for crictl version
	I0513 22:36:41.926499    6152 ssh_runner.go:195] Run: which crictl
	I0513 22:36:41.935122    6152 command_runner.go:130] > /usr/bin/crictl
	I0513 22:36:41.945160    6152 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0513 22:36:42.031723    6152 command_runner.go:130] > Version:  0.1.0
	I0513 22:36:42.031794    6152 command_runner.go:130] > RuntimeName:  docker
	I0513 22:36:42.031794    6152 command_runner.go:130] > RuntimeVersion:  26.1.1
	I0513 22:36:42.031794    6152 command_runner.go:130] > RuntimeApiVersion:  v1
	I0513 22:36:42.031794    6152 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.1
	RuntimeApiVersion:  v1
	I0513 22:36:42.042253    6152 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0513 22:36:42.195747    6152 command_runner.go:130] > 26.1.1
	I0513 22:36:42.213637    6152 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0513 22:36:42.329646    6152 command_runner.go:130] > 26.1.1
	I0513 22:36:42.337186    6152 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.1.1 ...
	I0513 22:36:42.345115    6152 cli_runner.go:164] Run: docker exec -t functional-950600 dig +short host.docker.internal
	I0513 22:36:42.621239    6152 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0513 22:36:42.636598    6152 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0513 22:36:42.682820    6152 command_runner.go:130] > 192.168.65.254	host.minikube.internal
	I0513 22:36:42.694351    6152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-950600
	I0513 22:36:42.866284    6152 kubeadm.go:877] updating cluster {Name:functional-950600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-950600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServer
IPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptim
izations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0513 22:36:42.866284    6152 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 22:36:42.879122    6152 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0513 22:36:43.087461    6152 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0513 22:36:43.087461    6152 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0513 22:36:43.087461    6152 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0513 22:36:43.087461    6152 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0513 22:36:43.087461    6152 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0513 22:36:43.087461    6152 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0513 22:36:43.087600    6152 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0513 22:36:43.087676    6152 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0513 22:36:43.087676    6152 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0513 22:36:43.087763    6152 docker.go:615] Images already preloaded, skipping extraction
	I0513 22:36:43.104636    6152 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0513 22:36:43.387186    6152 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0513 22:36:43.387312    6152 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0513 22:36:43.387312    6152 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0513 22:36:43.387312    6152 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0513 22:36:43.387312    6152 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0513 22:36:43.387312    6152 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0513 22:36:43.387312    6152 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0513 22:36:43.387312    6152 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0513 22:36:43.387312    6152 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0513 22:36:43.387547    6152 cache_images.go:84] Images are preloaded, skipping loading
	I0513 22:36:43.387727    6152 kubeadm.go:928] updating node { 192.168.49.2 8441 v1.30.0 docker true true} ...
	I0513 22:36:43.387867    6152 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-950600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:functional-950600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0513 22:36:43.405093    6152 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0513 22:36:43.811392    6152 command_runner.go:130] > cgroupfs
	I0513 22:36:43.811392    6152 cni.go:84] Creating CNI manager for ""
	I0513 22:36:43.811392    6152 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0513 22:36:43.811392    6152 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0513 22:36:43.811941    6152 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-950600 NodeName:functional-950600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0513 22:36:43.811941    6152 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-950600"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0513 22:36:43.824136    6152 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0513 22:36:43.906198    6152 command_runner.go:130] > kubeadm
	I0513 22:36:43.906276    6152 command_runner.go:130] > kubectl
	I0513 22:36:43.906324    6152 command_runner.go:130] > kubelet
	I0513 22:36:43.906417    6152 binaries.go:44] Found k8s binaries, skipping transfer
	I0513 22:36:43.922186    6152 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0513 22:36:43.990082    6152 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0513 22:36:44.114202    6152 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0513 22:36:44.381648    6152 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0513 22:36:44.606955    6152 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0513 22:36:44.685599    6152 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I0513 22:36:44.704562    6152 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 22:36:45.404986    6152 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0513 22:36:45.487128    6152 certs.go:68] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-950600 for IP: 192.168.49.2
	I0513 22:36:45.487128    6152 certs.go:194] generating shared ca certs ...
	I0513 22:36:45.487128    6152 certs.go:226] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 22:36:45.488023    6152 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I0513 22:36:45.488636    6152 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I0513 22:36:45.488831    6152 certs.go:256] generating profile certs ...
	I0513 22:36:45.489316    6152 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-950600\client.key
	I0513 22:36:45.490437    6152 certs.go:359] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-950600\apiserver.key.ca0e590f
	I0513 22:36:45.490951    6152 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-950600\proxy-client.key
	I0513 22:36:45.491015    6152 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0513 22:36:45.491063    6152 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0513 22:36:45.491063    6152 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0513 22:36:45.491063    6152 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0513 22:36:45.491612    6152 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-950600\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0513 22:36:45.491761    6152 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-950600\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0513 22:36:45.493758    6152 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-950600\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0513 22:36:45.493815    6152 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-950600\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0513 22:36:45.493815    6152 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\15868.pem (1338 bytes)
	W0513 22:36:45.494707    6152 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\15868_empty.pem, impossibly tiny 0 bytes
	I0513 22:36:45.494866    6152 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0513 22:36:45.495553    6152 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0513 22:36:45.495822    6152 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0513 22:36:45.495822    6152 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0513 22:36:45.496501    6152 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\158682.pem (1708 bytes)
	I0513 22:36:45.496979    6152 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\15868.pem -> /usr/share/ca-certificates/15868.pem
	I0513 22:36:45.497236    6152 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\158682.pem -> /usr/share/ca-certificates/158682.pem
	I0513 22:36:45.497236    6152 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0513 22:36:45.498895    6152 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0513 22:36:45.605651    6152 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0513 22:36:45.797660    6152 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0513 22:36:46.005427    6152 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0513 22:36:46.189898    6152 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-950600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0513 22:36:46.304351    6152 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-950600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0513 22:36:46.399496    6152 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-950600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0513 22:36:46.501856    6152 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-950600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0513 22:36:46.605878    6152 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\15868.pem --> /usr/share/ca-certificates/15868.pem (1338 bytes)
	I0513 22:36:46.793419    6152 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\158682.pem --> /usr/share/ca-certificates/158682.pem (1708 bytes)
	I0513 22:36:46.982567    6152 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0513 22:36:47.088771    6152 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0513 22:36:47.203408    6152 ssh_runner.go:195] Run: openssl version
	I0513 22:36:47.281334    6152 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0513 22:36:47.301435    6152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15868.pem && ln -fs /usr/share/ca-certificates/15868.pem /etc/ssl/certs/15868.pem"
	I0513 22:36:47.407827    6152 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15868.pem
	I0513 22:36:47.492474    6152 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May 13 22:34 /usr/share/ca-certificates/15868.pem
	I0513 22:36:47.492474    6152 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 13 22:34 /usr/share/ca-certificates/15868.pem
	I0513 22:36:47.506290    6152 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15868.pem
	I0513 22:36:47.593987    6152 command_runner.go:130] > 51391683
	I0513 22:36:47.606029    6152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15868.pem /etc/ssl/certs/51391683.0"
	I0513 22:36:47.706154    6152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/158682.pem && ln -fs /usr/share/ca-certificates/158682.pem /etc/ssl/certs/158682.pem"
	I0513 22:36:47.808786    6152 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/158682.pem
	I0513 22:36:47.885752    6152 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May 13 22:34 /usr/share/ca-certificates/158682.pem
	I0513 22:36:47.885752    6152 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 13 22:34 /usr/share/ca-certificates/158682.pem
	I0513 22:36:47.903617    6152 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/158682.pem
	I0513 22:36:47.982309    6152 command_runner.go:130] > 3ec20f2e
	I0513 22:36:47.997965    6152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/158682.pem /etc/ssl/certs/3ec20f2e.0"
	I0513 22:36:48.109484    6152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0513 22:36:48.210204    6152 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0513 22:36:48.284242    6152 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May 13 22:25 /usr/share/ca-certificates/minikubeCA.pem
	I0513 22:36:48.284338    6152 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 13 22:25 /usr/share/ca-certificates/minikubeCA.pem
	I0513 22:36:48.297743    6152 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0513 22:36:48.386123    6152 command_runner.go:130] > b5213941
	I0513 22:36:48.399018    6152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0513 22:36:48.502558    6152 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0513 22:36:48.582560    6152 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0513 22:36:48.582560    6152 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0513 22:36:48.582692    6152 command_runner.go:130] > Device: 830h/2096d	Inode: 19267       Links: 1
	I0513 22:36:48.582692    6152 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0513 22:36:48.582692    6152 command_runner.go:130] > Access: 2024-05-13 22:35:45.652471548 +0000
	I0513 22:36:48.582692    6152 command_runner.go:130] > Modify: 2024-05-13 22:35:45.652471548 +0000
	I0513 22:36:48.582692    6152 command_runner.go:130] > Change: 2024-05-13 22:35:45.652471548 +0000
	I0513 22:36:48.582692    6152 command_runner.go:130] >  Birth: 2024-05-13 22:35:45.652471548 +0000
	I0513 22:36:48.595460    6152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0513 22:36:48.614346    6152 command_runner.go:130] > Certificate will not expire
	I0513 22:36:48.626596    6152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0513 22:36:48.694578    6152 command_runner.go:130] > Certificate will not expire
	I0513 22:36:48.710474    6152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0513 22:36:48.780141    6152 command_runner.go:130] > Certificate will not expire
	I0513 22:36:48.792621    6152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0513 22:36:48.809287    6152 command_runner.go:130] > Certificate will not expire
	I0513 22:36:48.821839    6152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0513 22:36:48.885816    6152 command_runner.go:130] > Certificate will not expire
	I0513 22:36:48.898305    6152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0513 22:36:48.914623    6152 command_runner.go:130] > Certificate will not expire
	I0513 22:36:48.915042    6152 kubeadm.go:391] StartCluster: {Name:functional-950600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-950600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 22:36:48.923624    6152 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0513 22:36:49.027468    6152 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0513 22:36:49.090381    6152 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0513 22:36:49.090458    6152 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0513 22:36:49.090527    6152 command_runner.go:130] > /var/lib/minikube/etcd:
	I0513 22:36:49.090527    6152 command_runner.go:130] > member
	W0513 22:36:49.090527    6152 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0513 22:36:49.090610    6152 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0513 22:36:49.090663    6152 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0513 22:36:49.103343    6152 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0513 22:36:49.187395    6152 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0513 22:36:49.198364    6152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-950600
	I0513 22:36:49.371663    6152 kubeconfig.go:125] found "functional-950600" server: "https://127.0.0.1:52768"
	I0513 22:36:49.373027    6152 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0513 22:36:49.373876    6152 kapi.go:59] client config for functional-950600: &rest.Config{Host:"https://127.0.0.1:52768", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-950600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-950600\\client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19f8ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0513 22:36:49.375230    6152 cert_rotation.go:137] Starting client certificate rotation controller
	I0513 22:36:49.386247    6152 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0513 22:36:49.406863    6152 kubeadm.go:624] The running cluster does not require reconfiguration: 127.0.0.1
	I0513 22:36:49.407071    6152 kubeadm.go:591] duration metric: took 316.3349ms to restartPrimaryControlPlane
	I0513 22:36:49.407121    6152 kubeadm.go:393] duration metric: took 492.0583ms to StartCluster
	I0513 22:36:49.407121    6152 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 22:36:49.407362    6152 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0513 22:36:49.408925    6152 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 22:36:49.411675    6152 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 22:36:49.411632    6152 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0513 22:36:49.411824    6152 addons.go:69] Setting storage-provisioner=true in profile "functional-950600"
	I0513 22:36:49.411868    6152 addons.go:69] Setting default-storageclass=true in profile "functional-950600"
	I0513 22:36:49.421797    6152 out.go:177] * Verifying Kubernetes components...
	I0513 22:36:49.411868    6152 addons.go:234] Setting addon storage-provisioner=true in "functional-950600"
	I0513 22:36:49.411868    6152 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-950600"
	I0513 22:36:49.412363    6152 config.go:182] Loaded profile config "functional-950600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	W0513 22:36:49.423791    6152 addons.go:243] addon storage-provisioner should already be in state true
	I0513 22:36:49.424094    6152 host.go:66] Checking if "functional-950600" exists ...
	I0513 22:36:49.442820    6152 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 22:36:49.442820    6152 cli_runner.go:164] Run: docker container inspect functional-950600 --format={{.State.Status}}
	I0513 22:36:49.449821    6152 cli_runner.go:164] Run: docker container inspect functional-950600 --format={{.State.Status}}
	I0513 22:36:49.605863    6152 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0513 22:36:49.606876    6152 kapi.go:59] client config for functional-950600: &rest.Config{Host:"https://127.0.0.1:52768", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-950600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-950600\\client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19f8ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0513 22:36:49.607884    6152 addons.go:234] Setting addon default-storageclass=true in "functional-950600"
	W0513 22:36:49.607884    6152 addons.go:243] addon default-storageclass should already be in state true
	I0513 22:36:49.607884    6152 host.go:66] Checking if "functional-950600" exists ...
	I0513 22:36:49.624594    6152 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0513 22:36:49.626594    6152 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0513 22:36:49.626594    6152 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0513 22:36:49.628599    6152 cli_runner.go:164] Run: docker container inspect functional-950600 --format={{.State.Status}}
	I0513 22:36:49.634595    6152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-950600
	I0513 22:36:49.806819    6152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52764 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-950600\id_rsa Username:docker}
	I0513 22:36:49.823255    6152 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0513 22:36:49.823255    6152 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0513 22:36:49.832368    6152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-950600
	I0513 22:36:49.994705    6152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52764 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-950600\id_rsa Username:docker}
	I0513 22:36:49.997187    6152 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0513 22:36:50.032125    6152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-950600
	I0513 22:36:50.198198    6152 node_ready.go:35] waiting up to 6m0s for node "functional-950600" to be "Ready" ...
	I0513 22:36:50.198198    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/nodes/functional-950600
	I0513 22:36:50.198198    6152 round_trippers.go:469] Request Headers:
	I0513 22:36:50.198198    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:36:50.198198    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:36:50.401111    6152 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0513 22:36:50.503907    6152 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0513 22:36:50.786788    6152 round_trippers.go:574] Response Status: 200 OK in 588 milliseconds
	I0513 22:36:50.787253    6152 round_trippers.go:577] Response Headers:
	I0513 22:36:50.787287    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0513 22:36:50.787287    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0513 22:36:50.787287    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:36:50 GMT
	I0513 22:36:50.787287    6152 round_trippers.go:580]     Audit-Id: dc617b90-2a67-4dc8-a989-1b74f3e74e5a
	I0513 22:36:50.787287    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:36:50.787287    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:36:50.787853    6152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-950600","uid":"902c7eb1-02e3-4b23-8223-42166b3328a7","resourceVersion":"436","creationTimestamp":"2024-05-13T22:35:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-950600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-950600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_36_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:35:56Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0513 22:36:50.789581    6152 node_ready.go:49] node "functional-950600" has status "Ready":"True"
	I0513 22:36:50.789691    6152 node_ready.go:38] duration metric: took 591.468ms for node "functional-950600" to be "Ready" ...
	I0513 22:36:50.789799    6152 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0513 22:36:50.789942    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/namespaces/kube-system/pods
	I0513 22:36:50.790043    6152 round_trippers.go:469] Request Headers:
	I0513 22:36:50.790072    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:36:50.790123    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:36:50.996825    6152 round_trippers.go:574] Response Status: 200 OK in 206 milliseconds
	I0513 22:36:50.996825    6152 round_trippers.go:577] Response Headers:
	I0513 22:36:50.996825    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0513 22:36:50.996825    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0513 22:36:50.996825    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:36:50 GMT
	I0513 22:36:50.996825    6152 round_trippers.go:580]     Audit-Id: 96187221-ea5c-4419-afbd-01e683825573
	I0513 22:36:50.996825    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:36:50.996825    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:36:50.998025    6152 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"448"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-tctr7","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"803d730e-40db-44ae-a985-172df4c93426","resourceVersion":"431","creationTimestamp":"2024-05-13T22:36:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3d5bbe60-fc7f-46f3-8b9d-fd19541a5f4c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:36:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3d5bbe60-fc7f-46f3-8b9d-fd19541a5f4c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50306 chars]
	I0513 22:36:51.002917    6152 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-tctr7" in "kube-system" namespace to be "Ready" ...
	I0513 22:36:51.003075    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-tctr7
	I0513 22:36:51.003150    6152 round_trippers.go:469] Request Headers:
	I0513 22:36:51.003150    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:36:51.003150    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:36:51.201781    6152 round_trippers.go:574] Response Status: 200 OK in 198 milliseconds
	I0513 22:36:51.202320    6152 round_trippers.go:577] Response Headers:
	I0513 22:36:51.202320    6152 round_trippers.go:580]     Audit-Id: 3afa01e5-628e-45ac-bc75-352d44a4f7c5
	I0513 22:36:51.202550    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:36:51.202550    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:36:51.202663    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:36:51.202663    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:36:51.202663    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:36:51 GMT
	I0513 22:36:51.204171    6152 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-tctr7","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"803d730e-40db-44ae-a985-172df4c93426","resourceVersion":"431","creationTimestamp":"2024-05-13T22:36:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3d5bbe60-fc7f-46f3-8b9d-fd19541a5f4c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:36:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3d5bbe60-fc7f-46f3-8b9d-fd19541a5f4c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6239 chars]
	I0513 22:36:51.204793    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/nodes/functional-950600
	I0513 22:36:51.204793    6152 round_trippers.go:469] Request Headers:
	I0513 22:36:51.204793    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:36:51.204793    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:36:51.292181    6152 round_trippers.go:574] Response Status: 200 OK in 87 milliseconds
	I0513 22:36:51.292235    6152 round_trippers.go:577] Response Headers:
	I0513 22:36:51.292235    6152 round_trippers.go:580]     Audit-Id: 20cea38f-79e8-41de-9980-d576fadc052e
	I0513 22:36:51.292235    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:36:51.292337    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:36:51.292337    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:36:51.292385    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:36:51.292385    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:36:51 GMT
	I0513 22:36:51.292847    6152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-950600","uid":"902c7eb1-02e3-4b23-8223-42166b3328a7","resourceVersion":"436","creationTimestamp":"2024-05-13T22:35:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-950600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-950600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_36_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:35:56Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0513 22:36:51.293261    6152 pod_ready.go:92] pod "coredns-7db6d8ff4d-tctr7" in "kube-system" namespace has status "Ready":"True"
	I0513 22:36:51.293261    6152 pod_ready.go:81] duration metric: took 290.2511ms for pod "coredns-7db6d8ff4d-tctr7" in "kube-system" namespace to be "Ready" ...
	I0513 22:36:51.293261    6152 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-950600" in "kube-system" namespace to be "Ready" ...
	I0513 22:36:51.293261    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/namespaces/kube-system/pods/etcd-functional-950600
	I0513 22:36:51.293261    6152 round_trippers.go:469] Request Headers:
	I0513 22:36:51.293261    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:36:51.293261    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:36:51.392556    6152 round_trippers.go:574] Response Status: 200 OK in 99 milliseconds
	I0513 22:36:51.392556    6152 round_trippers.go:577] Response Headers:
	I0513 22:36:51.392686    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:36:51.392686    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:36:51.392686    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:36:51.392686    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:36:51 GMT
	I0513 22:36:51.392686    6152 round_trippers.go:580]     Audit-Id: 016c4c94-1818-49b3-92da-c97f8727a552
	I0513 22:36:51.392775    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:36:51.393002    6152 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-950600","namespace":"kube-system","uid":"4d364406-6af0-4436-a513-92bdd7b90e0c","resourceVersion":"325","creationTimestamp":"2024-05-13T22:36:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"1d9743e386fa70978c848291d403c78f","kubernetes.io/config.mirror":"1d9743e386fa70978c848291d403c78f","kubernetes.io/config.seen":"2024-05-13T22:36:00.306757655Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-950600","uid":"902c7eb1-02e3-4b23-8223-42166b3328a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:36:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/confi [truncated 6153 chars]
	I0513 22:36:51.393599    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/nodes/functional-950600
	I0513 22:36:51.393952    6152 round_trippers.go:469] Request Headers:
	I0513 22:36:51.393952    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:36:51.393952    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:36:51.485102    6152 round_trippers.go:574] Response Status: 200 OK in 91 milliseconds
	I0513 22:36:51.485447    6152 round_trippers.go:577] Response Headers:
	I0513 22:36:51.485950    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:36:51 GMT
	I0513 22:36:51.485950    6152 round_trippers.go:580]     Audit-Id: 4ac1d919-0038-4572-8345-500e9dfcacbb
	I0513 22:36:51.485950    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:36:51.485950    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:36:51.486065    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:36:51.486281    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:36:51.486843    6152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-950600","uid":"902c7eb1-02e3-4b23-8223-42166b3328a7","resourceVersion":"436","creationTimestamp":"2024-05-13T22:35:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-950600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-950600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_36_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:35:56Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0513 22:36:51.487585    6152 pod_ready.go:92] pod "etcd-functional-950600" in "kube-system" namespace has status "Ready":"True"
	I0513 22:36:51.487585    6152 pod_ready.go:81] duration metric: took 194.3157ms for pod "etcd-functional-950600" in "kube-system" namespace to be "Ready" ...
	I0513 22:36:51.487585    6152 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-950600" in "kube-system" namespace to be "Ready" ...
	I0513 22:36:51.487769    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-950600
	I0513 22:36:51.487769    6152 round_trippers.go:469] Request Headers:
	I0513 22:36:51.487769    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:36:51.487769    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:36:51.598823    6152 round_trippers.go:574] Response Status: 200 OK in 110 milliseconds
	I0513 22:36:51.598823    6152 round_trippers.go:577] Response Headers:
	I0513 22:36:51.598823    6152 round_trippers.go:580]     Audit-Id: a75afec5-09bf-44b2-a232-85b8421c3893
	I0513 22:36:51.598823    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:36:51.598823    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:36:51.598823    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:36:51.598823    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:36:51.598823    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:36:51 GMT
	I0513 22:36:51.679245    6152 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-950600","namespace":"kube-system","uid":"147df9fd-5a20-4fc8-ace9-c5d61bbca55a","resourceVersion":"320","creationTimestamp":"2024-05-13T22:36:00Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"d1cffe0942dee6a504e219504e6bc04e","kubernetes.io/config.mirror":"d1cffe0942dee6a504e219504e6bc04e","kubernetes.io/config.seen":"2024-05-13T22:36:00.306827666Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-950600","uid":"902c7eb1-02e3-4b23-8223-42166b3328a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:36:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8535 chars]
	I0513 22:36:51.681251    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/nodes/functional-950600
	I0513 22:36:51.681382    6152 round_trippers.go:469] Request Headers:
	I0513 22:36:51.681382    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:36:51.681382    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:36:51.691707    6152 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0513 22:36:51.691874    6152 round_trippers.go:577] Response Headers:
	I0513 22:36:51.691874    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:36:51.691874    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:36:51.691874    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:36:51.691874    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:36:51 GMT
	I0513 22:36:51.692001    6152 round_trippers.go:580]     Audit-Id: ec7c7b61-5b9b-4f91-a36d-a41a69d23e47
	I0513 22:36:51.692001    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:36:51.692001    6152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-950600","uid":"902c7eb1-02e3-4b23-8223-42166b3328a7","resourceVersion":"436","creationTimestamp":"2024-05-13T22:35:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-950600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-950600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_36_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:35:56Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0513 22:36:51.692952    6152 pod_ready.go:92] pod "kube-apiserver-functional-950600" in "kube-system" namespace has status "Ready":"True"
	I0513 22:36:51.693013    6152 pod_ready.go:81] duration metric: took 205.4202ms for pod "kube-apiserver-functional-950600" in "kube-system" namespace to be "Ready" ...
	I0513 22:36:51.693013    6152 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-950600" in "kube-system" namespace to be "Ready" ...
	I0513 22:36:51.693244    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-950600
	I0513 22:36:51.693304    6152 round_trippers.go:469] Request Headers:
	I0513 22:36:51.693304    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:36:51.693304    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:36:51.792620    6152 round_trippers.go:574] Response Status: 200 OK in 99 milliseconds
	I0513 22:36:51.795988    6152 round_trippers.go:577] Response Headers:
	I0513 22:36:51.795988    6152 round_trippers.go:580]     Audit-Id: db4950da-2ce7-45f8-a317-78e76d86aa2e
	I0513 22:36:51.795988    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:36:51.795988    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:36:51.795988    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:36:51.795988    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:36:51.795988    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:36:51 GMT
	I0513 22:36:51.795988    6152 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-950600","namespace":"kube-system","uid":"b07ed4ed-d6e7-4cb3-b2b7-d10b9ae3a582","resourceVersion":"317","creationTimestamp":"2024-05-13T22:36:00Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"46be264c56c4fbfea546ac2e2e49dbfc","kubernetes.io/config.mirror":"46be264c56c4fbfea546ac2e2e49dbfc","kubernetes.io/config.seen":"2024-05-13T22:36:00.306830566Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-950600","uid":"902c7eb1-02e3-4b23-8223-42166b3328a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:36:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 8110 chars]
	I0513 22:36:51.797706    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/nodes/functional-950600
	I0513 22:36:51.797785    6152 round_trippers.go:469] Request Headers:
	I0513 22:36:51.797785    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:36:51.797785    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:36:51.805130    6152 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0513 22:36:51.805265    6152 round_trippers.go:577] Response Headers:
	I0513 22:36:51.805322    6152 round_trippers.go:580]     Audit-Id: df51e3a1-7376-461a-abb3-d6b799310a42
	I0513 22:36:51.805358    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:36:51.805392    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:36:51.805392    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:36:51.805420    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:36:51.805420    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:36:51 GMT
	I0513 22:36:51.805420    6152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-950600","uid":"902c7eb1-02e3-4b23-8223-42166b3328a7","resourceVersion":"436","creationTimestamp":"2024-05-13T22:35:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-950600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-950600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_36_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:35:56Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0513 22:36:51.806069    6152 pod_ready.go:92] pod "kube-controller-manager-functional-950600" in "kube-system" namespace has status "Ready":"True"
	I0513 22:36:51.806101    6152 pod_ready.go:81] duration metric: took 113.0024ms for pod "kube-controller-manager-functional-950600" in "kube-system" namespace to be "Ready" ...
	I0513 22:36:51.806208    6152 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bqx72" in "kube-system" namespace to be "Ready" ...
	I0513 22:36:51.806410    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/namespaces/kube-system/pods/kube-proxy-bqx72
	I0513 22:36:51.806410    6152 round_trippers.go:469] Request Headers:
	I0513 22:36:51.807074    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:36:51.807074    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:36:51.881265    6152 round_trippers.go:574] Response Status: 200 OK in 74 milliseconds
	I0513 22:36:51.881265    6152 round_trippers.go:577] Response Headers:
	I0513 22:36:51.881599    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:36:51.881599    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:36:51 GMT
	I0513 22:36:51.881599    6152 round_trippers.go:580]     Audit-Id: 2da61b1c-f9ac-4ae8-a688-39e906af1243
	I0513 22:36:51.881599    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:36:51.881599    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:36:51.881599    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:36:51.881915    6152 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bqx72","generateName":"kube-proxy-","namespace":"kube-system","uid":"a644984a-80a5-459f-8986-72748e5f8487","resourceVersion":"398","creationTimestamp":"2024-05-13T22:36:13Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8d7d6f86-1544-4e17-8f11-dfff84de5c14","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:36:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8d7d6f86-1544-4e17-8f11-dfff84de5c14\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5825 chars]
	I0513 22:36:51.883092    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/nodes/functional-950600
	I0513 22:36:51.883092    6152 round_trippers.go:469] Request Headers:
	I0513 22:36:51.883170    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:36:51.883211    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:36:51.982216    6152 round_trippers.go:574] Response Status: 200 OK in 99 milliseconds
	I0513 22:36:51.982216    6152 round_trippers.go:577] Response Headers:
	I0513 22:36:51.982216    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:36:51.982216    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:36:51.982216    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:36:51.982216    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:36:51.982216    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:36:51 GMT
	I0513 22:36:51.982216    6152 round_trippers.go:580]     Audit-Id: 8505c453-98a1-4ece-ac47-18d5c2e41abd
	I0513 22:36:51.984741    6152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-950600","uid":"902c7eb1-02e3-4b23-8223-42166b3328a7","resourceVersion":"436","creationTimestamp":"2024-05-13T22:35:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-950600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-950600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_36_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:35:56Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0513 22:36:51.985566    6152 pod_ready.go:92] pod "kube-proxy-bqx72" in "kube-system" namespace has status "Ready":"True"
	I0513 22:36:51.985566    6152 pod_ready.go:81] duration metric: took 179.3507ms for pod "kube-proxy-bqx72" in "kube-system" namespace to be "Ready" ...
	I0513 22:36:51.985566    6152 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-950600" in "kube-system" namespace to be "Ready" ...
	I0513 22:36:51.986100    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-950600
	I0513 22:36:51.986100    6152 round_trippers.go:469] Request Headers:
	I0513 22:36:51.986169    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:36:51.986169    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:36:52.081144    6152 round_trippers.go:574] Response Status: 200 OK in 94 milliseconds
	I0513 22:36:52.081176    6152 round_trippers.go:577] Response Headers:
	I0513 22:36:52.081176    6152 round_trippers.go:580]     Audit-Id: b62aae26-073f-43fb-beca-fdcb404e9062
	I0513 22:36:52.081295    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:36:52.081295    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:36:52.081295    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:36:52.081295    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:36:52.081386    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:36:52 GMT
	I0513 22:36:52.081759    6152 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-950600","namespace":"kube-system","uid":"b8edaae9-85e1-437c-b198-7fb642809b58","resourceVersion":"454","creationTimestamp":"2024-05-13T22:36:00Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"cb9da9f07e863e25e41a37ea3f02d3c6","kubernetes.io/config.mirror":"cb9da9f07e863e25e41a37ea3f02d3c6","kubernetes.io/config.seen":"2024-05-13T22:36:00.306831866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-950600","uid":"902c7eb1-02e3-4b23-8223-42166b3328a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:36:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5396 chars]
	I0513 22:36:52.082148    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/nodes/functional-950600
	I0513 22:36:52.082702    6152 round_trippers.go:469] Request Headers:
	I0513 22:36:52.082702    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:36:52.082702    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:36:52.106654    6152 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I0513 22:36:52.106785    6152 round_trippers.go:577] Response Headers:
	I0513 22:36:52.106817    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:36:52.106817    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:36:52 GMT
	I0513 22:36:52.106877    6152 round_trippers.go:580]     Audit-Id: befd48f6-2e66-4e7e-aa06-cfea1923c8b5
	I0513 22:36:52.106877    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:36:52.106877    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:36:52.107012    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:36:52.107246    6152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-950600","uid":"902c7eb1-02e3-4b23-8223-42166b3328a7","resourceVersion":"436","creationTimestamp":"2024-05-13T22:35:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-950600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-950600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_36_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:35:56Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0513 22:36:52.492020    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-950600
	I0513 22:36:52.492096    6152 round_trippers.go:469] Request Headers:
	I0513 22:36:52.492096    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:36:52.492096    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:36:52.499160    6152 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0513 22:36:52.499160    6152 round_trippers.go:577] Response Headers:
	I0513 22:36:52.499160    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:36:52.499160    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:36:52.499160    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:36:52 GMT
	I0513 22:36:52.499160    6152 round_trippers.go:580]     Audit-Id: 12c48ee0-ea96-4df0-9f82-d12eddb51e9b
	I0513 22:36:52.499160    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:36:52.499160    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:36:52.499988    6152 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-950600","namespace":"kube-system","uid":"b8edaae9-85e1-437c-b198-7fb642809b58","resourceVersion":"454","creationTimestamp":"2024-05-13T22:36:00Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"cb9da9f07e863e25e41a37ea3f02d3c6","kubernetes.io/config.mirror":"cb9da9f07e863e25e41a37ea3f02d3c6","kubernetes.io/config.seen":"2024-05-13T22:36:00.306831866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-950600","uid":"902c7eb1-02e3-4b23-8223-42166b3328a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:36:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5396 chars]
	I0513 22:36:52.500465    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/nodes/functional-950600
	I0513 22:36:52.500465    6152 round_trippers.go:469] Request Headers:
	I0513 22:36:52.501012    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:36:52.501012    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:36:52.581238    6152 round_trippers.go:574] Response Status: 200 OK in 80 milliseconds
	I0513 22:36:52.581303    6152 round_trippers.go:577] Response Headers:
	I0513 22:36:52.581303    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:36:52.581394    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:36:52 GMT
	I0513 22:36:52.581394    6152 round_trippers.go:580]     Audit-Id: b38278ae-9234-4967-be45-f0be31866356
	I0513 22:36:52.581394    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:36:52.581394    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:36:52.581394    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:36:52.581394    6152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-950600","uid":"902c7eb1-02e3-4b23-8223-42166b3328a7","resourceVersion":"436","creationTimestamp":"2024-05-13T22:35:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-950600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-950600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_36_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:35:56Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0513 22:36:52.991113    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-950600
	I0513 22:36:52.991188    6152 round_trippers.go:469] Request Headers:
	I0513 22:36:52.991188    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:36:52.991188    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:36:52.996921    6152 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 22:36:52.996921    6152 round_trippers.go:577] Response Headers:
	I0513 22:36:52.996921    6152 round_trippers.go:580]     Audit-Id: f6d8fe69-d0e5-41cc-83bd-760ba64d6499
	I0513 22:36:52.996921    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:36:52.996921    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:36:52.996921    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:36:52.996921    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:36:52.996921    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:36:52 GMT
	I0513 22:36:52.997466    6152 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-950600","namespace":"kube-system","uid":"b8edaae9-85e1-437c-b198-7fb642809b58","resourceVersion":"483","creationTimestamp":"2024-05-13T22:36:00Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"cb9da9f07e863e25e41a37ea3f02d3c6","kubernetes.io/config.mirror":"cb9da9f07e863e25e41a37ea3f02d3c6","kubernetes.io/config.seen":"2024-05-13T22:36:00.306831866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-950600","uid":"902c7eb1-02e3-4b23-8223-42166b3328a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:36:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0513 22:36:52.998147    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/nodes/functional-950600
	I0513 22:36:52.998147    6152 round_trippers.go:469] Request Headers:
	I0513 22:36:52.998205    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:36:52.998205    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:36:53.004881    6152 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0513 22:36:53.004881    6152 round_trippers.go:577] Response Headers:
	I0513 22:36:53.004881    6152 round_trippers.go:580]     Audit-Id: 7e1aa03a-d92a-4f85-b513-b1ecbae0ff9c
	I0513 22:36:53.004881    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:36:53.004881    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:36:53.004881    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:36:53.004881    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:36:53.004881    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:36:53 GMT
	I0513 22:36:53.004881    6152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-950600","uid":"902c7eb1-02e3-4b23-8223-42166b3328a7","resourceVersion":"436","creationTimestamp":"2024-05-13T22:35:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-950600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-950600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_36_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:35:56Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0513 22:36:53.496917    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-950600
	I0513 22:36:53.497134    6152 round_trippers.go:469] Request Headers:
	I0513 22:36:53.497199    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:36:53.497199    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:36:53.502112    6152 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 22:36:53.502112    6152 round_trippers.go:577] Response Headers:
	I0513 22:36:53.502112    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:36:53 GMT
	I0513 22:36:53.502112    6152 round_trippers.go:580]     Audit-Id: de98dee5-8199-40f5-830a-20c29d7f501a
	I0513 22:36:53.502112    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:36:53.502112    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:36:53.502112    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:36:53.502112    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:36:53.502112    6152 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-950600","namespace":"kube-system","uid":"b8edaae9-85e1-437c-b198-7fb642809b58","resourceVersion":"483","creationTimestamp":"2024-05-13T22:36:00Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"cb9da9f07e863e25e41a37ea3f02d3c6","kubernetes.io/config.mirror":"cb9da9f07e863e25e41a37ea3f02d3c6","kubernetes.io/config.seen":"2024-05-13T22:36:00.306831866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-950600","uid":"902c7eb1-02e3-4b23-8223-42166b3328a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:36:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0513 22:36:53.502112    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/nodes/functional-950600
	I0513 22:36:53.502112    6152 round_trippers.go:469] Request Headers:
	I0513 22:36:53.502112    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:36:53.502112    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:36:53.514098    6152 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0513 22:36:53.514098    6152 round_trippers.go:577] Response Headers:
	I0513 22:36:53.514098    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:36:53.514098    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:36:53 GMT
	I0513 22:36:53.514098    6152 round_trippers.go:580]     Audit-Id: dc9219b1-3cc1-4c9d-94d7-7f0a6ff0a400
	I0513 22:36:53.514098    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:36:53.514098    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:36:53.514098    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:36:53.515594    6152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-950600","uid":"902c7eb1-02e3-4b23-8223-42166b3328a7","resourceVersion":"436","creationTimestamp":"2024-05-13T22:35:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-950600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-950600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_36_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:35:56Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0513 22:36:53.517647    6152 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0513 22:36:53.517647    6152 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0513 22:36:53.517647    6152 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0513 22:36:53.517647    6152 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0513 22:36:53.517647    6152 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0513 22:36:53.517647    6152 command_runner.go:130] > pod/storage-provisioner configured
	I0513 22:36:53.517647    6152 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.1164048s)
	I0513 22:36:53.518205    6152 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0513 22:36:53.518275    6152 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.0142414s)
	I0513 22:36:53.518416    6152 round_trippers.go:463] GET https://127.0.0.1:52768/apis/storage.k8s.io/v1/storageclasses
	I0513 22:36:53.518486    6152 round_trippers.go:469] Request Headers:
	I0513 22:36:53.518486    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:36:53.518555    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:36:53.580874    6152 round_trippers.go:574] Response Status: 200 OK in 62 milliseconds
	I0513 22:36:53.580874    6152 round_trippers.go:577] Response Headers:
	I0513 22:36:53.580874    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:36:53.580874    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:36:53.580874    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:36:53.580874    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:36:53.580874    6152 round_trippers.go:580]     Content-Length: 1273
	I0513 22:36:53.580874    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:36:53 GMT
	I0513 22:36:53.580874    6152 round_trippers.go:580]     Audit-Id: 51932101-3f3a-437f-8b03-9c0d5e26db38
	I0513 22:36:53.580874    6152 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"516"},"items":[{"metadata":{"name":"standard","uid":"1c73ed4c-6e59-4042-be70-84000b9a4a3b","resourceVersion":"386","creationTimestamp":"2024-05-13T22:36:16Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-13T22:36:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0513 22:36:53.583402    6152 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"1c73ed4c-6e59-4042-be70-84000b9a4a3b","resourceVersion":"386","creationTimestamp":"2024-05-13T22:36:16Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-13T22:36:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0513 22:36:53.583402    6152 round_trippers.go:463] PUT https://127.0.0.1:52768/apis/storage.k8s.io/v1/storageclasses/standard
	I0513 22:36:53.583928    6152 round_trippers.go:469] Request Headers:
	I0513 22:36:53.584077    6152 round_trippers.go:473]     Content-Type: application/json
	I0513 22:36:53.584572    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:36:53.584926    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:36:53.594092    6152 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0513 22:36:53.594092    6152 round_trippers.go:577] Response Headers:
	I0513 22:36:53.594092    6152 round_trippers.go:580]     Audit-Id: 659de8da-671e-4f44-bbc2-9d363c1ebbd9
	I0513 22:36:53.594092    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:36:53.594092    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:36:53.594092    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:36:53.594092    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:36:53.594092    6152 round_trippers.go:580]     Content-Length: 1220
	I0513 22:36:53.594092    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:36:53 GMT
	I0513 22:36:53.594092    6152 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"1c73ed4c-6e59-4042-be70-84000b9a4a3b","resourceVersion":"386","creationTimestamp":"2024-05-13T22:36:16Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-13T22:36:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0513 22:36:53.598352    6152 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0513 22:36:53.600453    6152 addons.go:505] duration metric: took 4.1887166s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0513 22:36:54.000875    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-950600
	I0513 22:36:54.000950    6152 round_trippers.go:469] Request Headers:
	I0513 22:36:54.000950    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:36:54.000950    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:36:54.006365    6152 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 22:36:54.006365    6152 round_trippers.go:577] Response Headers:
	I0513 22:36:54.006365    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:36:54.006365    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:36:54 GMT
	I0513 22:36:54.006365    6152 round_trippers.go:580]     Audit-Id: 566f2334-403d-4afa-bc41-d653510d7b5f
	I0513 22:36:54.006365    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:36:54.006365    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:36:54.006365    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:36:54.006365    6152 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-950600","namespace":"kube-system","uid":"b8edaae9-85e1-437c-b198-7fb642809b58","resourceVersion":"483","creationTimestamp":"2024-05-13T22:36:00Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"cb9da9f07e863e25e41a37ea3f02d3c6","kubernetes.io/config.mirror":"cb9da9f07e863e25e41a37ea3f02d3c6","kubernetes.io/config.seen":"2024-05-13T22:36:00.306831866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-950600","uid":"902c7eb1-02e3-4b23-8223-42166b3328a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:36:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0513 22:36:54.007079    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/nodes/functional-950600
	I0513 22:36:54.007079    6152 round_trippers.go:469] Request Headers:
	I0513 22:36:54.007079    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:36:54.007079    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:36:54.012804    6152 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 22:36:54.012804    6152 round_trippers.go:577] Response Headers:
	I0513 22:36:54.012804    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:36:54.012804    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:36:54.012804    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:36:54 GMT
	I0513 22:36:54.012804    6152 round_trippers.go:580]     Audit-Id: 192cc7b1-41ee-4bd2-aac8-15d3acd82c13
	I0513 22:36:54.012804    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:36:54.012804    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:36:54.012804    6152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-950600","uid":"902c7eb1-02e3-4b23-8223-42166b3328a7","resourceVersion":"436","creationTimestamp":"2024-05-13T22:35:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-950600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-950600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_36_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:35:56Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0513 22:36:54.013530    6152 pod_ready.go:102] pod "kube-scheduler-functional-950600" in "kube-system" namespace has status "Ready":"False"
	I0513 22:36:54.498764    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-950600
	I0513 22:36:54.498838    6152 round_trippers.go:469] Request Headers:
	I0513 22:36:54.498838    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:36:54.498895    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:36:54.504002    6152 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 22:36:54.504002    6152 round_trippers.go:577] Response Headers:
	I0513 22:36:54.504002    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:36:54.504002    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:36:54.504002    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:36:54.504002    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:36:54 GMT
	I0513 22:36:54.504002    6152 round_trippers.go:580]     Audit-Id: 0fae2ff0-b4c4-4c40-881d-837bd4f4349c
	I0513 22:36:54.504002    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:36:54.504987    6152 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-950600","namespace":"kube-system","uid":"b8edaae9-85e1-437c-b198-7fb642809b58","resourceVersion":"483","creationTimestamp":"2024-05-13T22:36:00Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"cb9da9f07e863e25e41a37ea3f02d3c6","kubernetes.io/config.mirror":"cb9da9f07e863e25e41a37ea3f02d3c6","kubernetes.io/config.seen":"2024-05-13T22:36:00.306831866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-950600","uid":"902c7eb1-02e3-4b23-8223-42166b3328a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:36:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0513 22:36:54.505827    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/nodes/functional-950600
	I0513 22:36:54.505827    6152 round_trippers.go:469] Request Headers:
	I0513 22:36:54.505827    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:36:54.505827    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:36:54.512009    6152 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0513 22:36:54.512009    6152 round_trippers.go:577] Response Headers:
	I0513 22:36:54.512009    6152 round_trippers.go:580]     Audit-Id: 63913194-ed94-4075-a8db-a5cb70e942a2
	I0513 22:36:54.512009    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:36:54.512009    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:36:54.512009    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:36:54.512009    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:36:54.512009    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:36:54 GMT
	I0513 22:36:54.512960    6152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-950600","uid":"902c7eb1-02e3-4b23-8223-42166b3328a7","resourceVersion":"436","creationTimestamp":"2024-05-13T22:35:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-950600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-950600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_36_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:35:56Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0513 22:36:54.998749    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-950600
	I0513 22:36:54.998749    6152 round_trippers.go:469] Request Headers:
	I0513 22:36:54.998749    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:36:54.998749    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:36:55.003838    6152 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 22:36:55.004001    6152 round_trippers.go:577] Response Headers:
	I0513 22:36:55.004001    6152 round_trippers.go:580]     Audit-Id: 4a12715a-5999-4c73-8105-c13dd3922ed7
	I0513 22:36:55.004001    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:36:55.004001    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:36:55.004094    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:36:55.004094    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:36:55.004094    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:36:55 GMT
	I0513 22:36:55.005041    6152 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-950600","namespace":"kube-system","uid":"b8edaae9-85e1-437c-b198-7fb642809b58","resourceVersion":"483","creationTimestamp":"2024-05-13T22:36:00Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"cb9da9f07e863e25e41a37ea3f02d3c6","kubernetes.io/config.mirror":"cb9da9f07e863e25e41a37ea3f02d3c6","kubernetes.io/config.seen":"2024-05-13T22:36:00.306831866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-950600","uid":"902c7eb1-02e3-4b23-8223-42166b3328a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:36:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0513 22:36:55.005336    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/nodes/functional-950600
	I0513 22:36:55.005336    6152 round_trippers.go:469] Request Headers:
	I0513 22:36:55.005336    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:36:55.005336    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:36:55.014545    6152 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0513 22:36:55.014545    6152 round_trippers.go:577] Response Headers:
	I0513 22:36:55.014747    6152 round_trippers.go:580]     Audit-Id: 70052d40-0fdb-4223-821e-fcb16857cea8
	I0513 22:36:55.014786    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:36:55.014786    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:36:55.014786    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:36:55.014830    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:36:55.014830    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:36:55 GMT
	I0513 22:36:55.014969    6152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-950600","uid":"902c7eb1-02e3-4b23-8223-42166b3328a7","resourceVersion":"436","creationTimestamp":"2024-05-13T22:35:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-950600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-950600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_36_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:35:56Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0513 22:36:55.500985    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-950600
	I0513 22:36:55.501066    6152 round_trippers.go:469] Request Headers:
	I0513 22:36:55.501066    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:36:55.501066    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:36:55.506992    6152 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 22:36:55.507093    6152 round_trippers.go:577] Response Headers:
	I0513 22:36:55.507093    6152 round_trippers.go:580]     Audit-Id: 292e6d89-d93b-4297-bf5b-a54884735739
	I0513 22:36:55.507093    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:36:55.507093    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:36:55.507093    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:36:55.507164    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:36:55.507164    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:36:55 GMT
	I0513 22:36:55.507352    6152 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-950600","namespace":"kube-system","uid":"b8edaae9-85e1-437c-b198-7fb642809b58","resourceVersion":"483","creationTimestamp":"2024-05-13T22:36:00Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"cb9da9f07e863e25e41a37ea3f02d3c6","kubernetes.io/config.mirror":"cb9da9f07e863e25e41a37ea3f02d3c6","kubernetes.io/config.seen":"2024-05-13T22:36:00.306831866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-950600","uid":"902c7eb1-02e3-4b23-8223-42166b3328a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:36:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0513 22:36:55.507975    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/nodes/functional-950600
	I0513 22:36:55.507975    6152 round_trippers.go:469] Request Headers:
	I0513 22:36:55.507975    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:36:55.507975    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:36:55.513785    6152 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 22:36:55.513785    6152 round_trippers.go:577] Response Headers:
	I0513 22:36:55.514331    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:36:55.514331    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:36:55.514331    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:36:55.514331    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:36:55.514331    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:36:55 GMT
	I0513 22:36:55.514331    6152 round_trippers.go:580]     Audit-Id: d8cea1f8-cb81-4961-b842-56564322b5f9
	I0513 22:36:55.514596    6152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-950600","uid":"902c7eb1-02e3-4b23-8223-42166b3328a7","resourceVersion":"436","creationTimestamp":"2024-05-13T22:35:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-950600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-950600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_36_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:35:56Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0513 22:36:55.996365    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-950600
	I0513 22:36:55.996365    6152 round_trippers.go:469] Request Headers:
	I0513 22:36:55.996365    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:36:55.996365    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:36:56.002070    6152 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 22:36:56.002126    6152 round_trippers.go:577] Response Headers:
	I0513 22:36:56.002163    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:36:56.002163    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:36:56.002163    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:36:56 GMT
	I0513 22:36:56.002163    6152 round_trippers.go:580]     Audit-Id: 2f32d9e7-5620-4bdd-a79a-fe2a9d1afcd9
	I0513 22:36:56.002163    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:36:56.002163    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:36:56.002219    6152 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-950600","namespace":"kube-system","uid":"b8edaae9-85e1-437c-b198-7fb642809b58","resourceVersion":"483","creationTimestamp":"2024-05-13T22:36:00Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"cb9da9f07e863e25e41a37ea3f02d3c6","kubernetes.io/config.mirror":"cb9da9f07e863e25e41a37ea3f02d3c6","kubernetes.io/config.seen":"2024-05-13T22:36:00.306831866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-950600","uid":"902c7eb1-02e3-4b23-8223-42166b3328a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:36:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0513 22:36:56.003012    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/nodes/functional-950600
	I0513 22:36:56.003093    6152 round_trippers.go:469] Request Headers:
	I0513 22:36:56.003093    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:36:56.003167    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:36:56.008880    6152 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 22:36:56.009050    6152 round_trippers.go:577] Response Headers:
	I0513 22:36:56.009050    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:36:56.009050    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:36:56.009050    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:36:56 GMT
	I0513 22:36:56.009050    6152 round_trippers.go:580]     Audit-Id: aecbd2a8-a124-40ee-9a58-1b97d609edaa
	I0513 22:36:56.009050    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:36:56.009050    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:36:56.009050    6152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-950600","uid":"902c7eb1-02e3-4b23-8223-42166b3328a7","resourceVersion":"436","creationTimestamp":"2024-05-13T22:35:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-950600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-950600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_36_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:35:56Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0513 22:36:56.494639    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-950600
	I0513 22:36:56.494812    6152 round_trippers.go:469] Request Headers:
	I0513 22:36:56.494812    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:36:56.494812    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:36:56.500518    6152 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 22:36:56.500557    6152 round_trippers.go:577] Response Headers:
	I0513 22:36:56.500557    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:36:56.500557    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:36:56.500557    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:36:56.500557    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:36:56.500557    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:36:56 GMT
	I0513 22:36:56.500557    6152 round_trippers.go:580]     Audit-Id: bd60adf5-e162-4ea7-b87a-114897ecd795
	I0513 22:36:56.500986    6152 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-950600","namespace":"kube-system","uid":"b8edaae9-85e1-437c-b198-7fb642809b58","resourceVersion":"483","creationTimestamp":"2024-05-13T22:36:00Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"cb9da9f07e863e25e41a37ea3f02d3c6","kubernetes.io/config.mirror":"cb9da9f07e863e25e41a37ea3f02d3c6","kubernetes.io/config.seen":"2024-05-13T22:36:00.306831866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-950600","uid":"902c7eb1-02e3-4b23-8223-42166b3328a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:36:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0513 22:36:56.501300    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/nodes/functional-950600
	I0513 22:36:56.501300    6152 round_trippers.go:469] Request Headers:
	I0513 22:36:56.501300    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:36:56.501300    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:36:56.507297    6152 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 22:36:56.507297    6152 round_trippers.go:577] Response Headers:
	I0513 22:36:56.507297    6152 round_trippers.go:580]     Audit-Id: e6b0bf79-25c3-40fb-99e7-a328a7021814
	I0513 22:36:56.507297    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:36:56.507297    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:36:56.507297    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:36:56.507297    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:36:56.507297    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:36:56 GMT
	I0513 22:36:56.507297    6152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-950600","uid":"902c7eb1-02e3-4b23-8223-42166b3328a7","resourceVersion":"436","creationTimestamp":"2024-05-13T22:35:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-950600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-950600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_36_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:35:56Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0513 22:36:56.509374    6152 pod_ready.go:102] pod "kube-scheduler-functional-950600" in "kube-system" namespace has status "Ready":"False"
	I0513 22:36:56.993011    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-950600
	I0513 22:36:56.993011    6152 round_trippers.go:469] Request Headers:
	I0513 22:36:56.993011    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:36:56.993087    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:36:56.998347    6152 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 22:36:56.998347    6152 round_trippers.go:577] Response Headers:
	I0513 22:36:56.998439    6152 round_trippers.go:580]     Audit-Id: b23dffdb-6223-47ad-8b09-1584100dc634
	I0513 22:36:56.998439    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:36:56.998439    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:36:56.998439    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:36:56.998439    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:36:56.998439    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:36:56 GMT
	I0513 22:36:56.998791    6152 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-950600","namespace":"kube-system","uid":"b8edaae9-85e1-437c-b198-7fb642809b58","resourceVersion":"483","creationTimestamp":"2024-05-13T22:36:00Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"cb9da9f07e863e25e41a37ea3f02d3c6","kubernetes.io/config.mirror":"cb9da9f07e863e25e41a37ea3f02d3c6","kubernetes.io/config.seen":"2024-05-13T22:36:00.306831866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-950600","uid":"902c7eb1-02e3-4b23-8223-42166b3328a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:36:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0513 22:36:56.999464    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/nodes/functional-950600
	I0513 22:36:56.999464    6152 round_trippers.go:469] Request Headers:
	I0513 22:36:56.999464    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:36:56.999464    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:36:57.005987    6152 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0513 22:36:57.005987    6152 round_trippers.go:577] Response Headers:
	I0513 22:36:57.005987    6152 round_trippers.go:580]     Audit-Id: cf9ab596-e203-4f72-9d95-c25b2bb2c2bc
	I0513 22:36:57.005987    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:36:57.005987    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:36:57.005987    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:36:57.005987    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:36:57.005987    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:36:57 GMT
	I0513 22:36:57.006749    6152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-950600","uid":"902c7eb1-02e3-4b23-8223-42166b3328a7","resourceVersion":"436","creationTimestamp":"2024-05-13T22:35:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-950600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-950600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_36_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:35:56Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0513 22:36:57.488755    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-950600
	I0513 22:36:57.488999    6152 round_trippers.go:469] Request Headers:
	I0513 22:36:57.488999    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:36:57.488999    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:36:57.494607    6152 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 22:36:57.494607    6152 round_trippers.go:577] Response Headers:
	I0513 22:36:57.494607    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:36:57.494607    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:36:57.494607    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:36:57 GMT
	I0513 22:36:57.494607    6152 round_trippers.go:580]     Audit-Id: aca2a026-c1d0-4c95-81df-28eadea6205d
	I0513 22:36:57.494607    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:36:57.494607    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:36:57.494607    6152 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-950600","namespace":"kube-system","uid":"b8edaae9-85e1-437c-b198-7fb642809b58","resourceVersion":"483","creationTimestamp":"2024-05-13T22:36:00Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"cb9da9f07e863e25e41a37ea3f02d3c6","kubernetes.io/config.mirror":"cb9da9f07e863e25e41a37ea3f02d3c6","kubernetes.io/config.seen":"2024-05-13T22:36:00.306831866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-950600","uid":"902c7eb1-02e3-4b23-8223-42166b3328a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:36:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0513 22:36:57.495597    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/nodes/functional-950600
	I0513 22:36:57.495597    6152 round_trippers.go:469] Request Headers:
	I0513 22:36:57.495597    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:36:57.495597    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:36:57.501852    6152 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0513 22:36:57.501852    6152 round_trippers.go:577] Response Headers:
	I0513 22:36:57.501852    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:36:57 GMT
	I0513 22:36:57.501852    6152 round_trippers.go:580]     Audit-Id: b28f8865-e578-4a42-b699-989bfba24fd6
	I0513 22:36:57.501852    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:36:57.501852    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:36:57.501852    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:36:57.501852    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:36:57.501852    6152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-950600","uid":"902c7eb1-02e3-4b23-8223-42166b3328a7","resourceVersion":"436","creationTimestamp":"2024-05-13T22:35:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-950600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-950600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_36_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:35:56Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0513 22:36:57.990238    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-950600
	I0513 22:36:57.990295    6152 round_trippers.go:469] Request Headers:
	I0513 22:36:57.990359    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:36:57.990379    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:36:57.995169    6152 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 22:36:57.995169    6152 round_trippers.go:577] Response Headers:
	I0513 22:36:57.995169    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:36:57.995169    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:36:57.995169    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:36:57.995169    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:36:57 GMT
	I0513 22:36:57.995169    6152 round_trippers.go:580]     Audit-Id: 5750a160-c9a2-4dad-a296-cfa4c153a440
	I0513 22:36:57.995169    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:36:57.995934    6152 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-950600","namespace":"kube-system","uid":"b8edaae9-85e1-437c-b198-7fb642809b58","resourceVersion":"483","creationTimestamp":"2024-05-13T22:36:00Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"cb9da9f07e863e25e41a37ea3f02d3c6","kubernetes.io/config.mirror":"cb9da9f07e863e25e41a37ea3f02d3c6","kubernetes.io/config.seen":"2024-05-13T22:36:00.306831866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-950600","uid":"902c7eb1-02e3-4b23-8223-42166b3328a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:36:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0513 22:36:57.996193    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/nodes/functional-950600
	I0513 22:36:57.996193    6152 round_trippers.go:469] Request Headers:
	I0513 22:36:57.996193    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:36:57.996193    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:36:58.001319    6152 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 22:36:58.001319    6152 round_trippers.go:577] Response Headers:
	I0513 22:36:58.001319    6152 round_trippers.go:580]     Audit-Id: 407c4a5d-9956-401b-948c-031941c26282
	I0513 22:36:58.001319    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:36:58.001319    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:36:58.001319    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:36:58.001319    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:36:58.001319    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:36:58 GMT
	I0513 22:36:58.001969    6152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-950600","uid":"902c7eb1-02e3-4b23-8223-42166b3328a7","resourceVersion":"436","creationTimestamp":"2024-05-13T22:35:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-950600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-950600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_36_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:35:56Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0513 22:36:58.493830    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-950600
	I0513 22:36:58.493903    6152 round_trippers.go:469] Request Headers:
	I0513 22:36:58.493903    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:36:58.493903    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:36:58.500477    6152 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0513 22:36:58.500477    6152 round_trippers.go:577] Response Headers:
	I0513 22:36:58.500477    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:36:58 GMT
	I0513 22:36:58.500477    6152 round_trippers.go:580]     Audit-Id: e0fd2d4b-4c10-40b3-aa8c-c497a7dfc0bb
	I0513 22:36:58.500477    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:36:58.500477    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:36:58.501029    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:36:58.501029    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:36:58.501309    6152 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-950600","namespace":"kube-system","uid":"b8edaae9-85e1-437c-b198-7fb642809b58","resourceVersion":"483","creationTimestamp":"2024-05-13T22:36:00Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"cb9da9f07e863e25e41a37ea3f02d3c6","kubernetes.io/config.mirror":"cb9da9f07e863e25e41a37ea3f02d3c6","kubernetes.io/config.seen":"2024-05-13T22:36:00.306831866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-950600","uid":"902c7eb1-02e3-4b23-8223-42166b3328a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:36:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0513 22:36:58.501962    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/nodes/functional-950600
	I0513 22:36:58.501962    6152 round_trippers.go:469] Request Headers:
	I0513 22:36:58.502015    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:36:58.502015    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:36:58.508609    6152 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0513 22:36:58.508609    6152 round_trippers.go:577] Response Headers:
	I0513 22:36:58.508609    6152 round_trippers.go:580]     Audit-Id: ccde37d2-1cd4-4bd4-a34a-68791899ebb5
	I0513 22:36:58.508609    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:36:58.508609    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:36:58.508609    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:36:58.508609    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:36:58.508609    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:36:58 GMT
	I0513 22:36:58.509251    6152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-950600","uid":"902c7eb1-02e3-4b23-8223-42166b3328a7","resourceVersion":"436","creationTimestamp":"2024-05-13T22:35:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-950600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-950600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_36_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:35:56Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0513 22:36:58.509892    6152 pod_ready.go:102] pod "kube-scheduler-functional-950600" in "kube-system" namespace has status "Ready":"False"
	I0513 22:36:58.993474    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-950600
	I0513 22:36:58.993474    6152 round_trippers.go:469] Request Headers:
	I0513 22:36:58.993571    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:36:58.993571    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:36:58.998683    6152 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 22:36:58.998683    6152 round_trippers.go:577] Response Headers:
	I0513 22:36:58.998683    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:36:58.998683    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:36:58.998683    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:36:58.998683    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:36:58 GMT
	I0513 22:36:58.998683    6152 round_trippers.go:580]     Audit-Id: a5604aac-3a2a-4c73-945b-f23df38a70a8
	I0513 22:36:58.998683    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:36:58.999218    6152 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-950600","namespace":"kube-system","uid":"b8edaae9-85e1-437c-b198-7fb642809b58","resourceVersion":"483","creationTimestamp":"2024-05-13T22:36:00Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"cb9da9f07e863e25e41a37ea3f02d3c6","kubernetes.io/config.mirror":"cb9da9f07e863e25e41a37ea3f02d3c6","kubernetes.io/config.seen":"2024-05-13T22:36:00.306831866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-950600","uid":"902c7eb1-02e3-4b23-8223-42166b3328a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:36:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0513 22:36:58.999768    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/nodes/functional-950600
	I0513 22:36:58.999768    6152 round_trippers.go:469] Request Headers:
	I0513 22:36:58.999768    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:36:58.999768    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:36:59.005904    6152 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0513 22:36:59.005949    6152 round_trippers.go:577] Response Headers:
	I0513 22:36:59.005949    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:36:59 GMT
	I0513 22:36:59.005949    6152 round_trippers.go:580]     Audit-Id: 5c9f7da0-808c-45b9-83bf-859e7f2c1b22
	I0513 22:36:59.005949    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:36:59.005949    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:36:59.005949    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:36:59.005949    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:36:59.006179    6152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-950600","uid":"902c7eb1-02e3-4b23-8223-42166b3328a7","resourceVersion":"436","creationTimestamp":"2024-05-13T22:35:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-950600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-950600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_36_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:35:56Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0513 22:36:59.495211    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-950600
	I0513 22:36:59.495270    6152 round_trippers.go:469] Request Headers:
	I0513 22:36:59.495270    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:36:59.495270    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:36:59.505199    6152 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0513 22:36:59.505199    6152 round_trippers.go:577] Response Headers:
	I0513 22:36:59.505199    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:36:59 GMT
	I0513 22:36:59.505199    6152 round_trippers.go:580]     Audit-Id: 69172a8f-b311-4f1c-bc04-cfc045c083b9
	I0513 22:36:59.505199    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:36:59.505199    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:36:59.505199    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:36:59.505199    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:36:59.505869    6152 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-950600","namespace":"kube-system","uid":"b8edaae9-85e1-437c-b198-7fb642809b58","resourceVersion":"483","creationTimestamp":"2024-05-13T22:36:00Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"cb9da9f07e863e25e41a37ea3f02d3c6","kubernetes.io/config.mirror":"cb9da9f07e863e25e41a37ea3f02d3c6","kubernetes.io/config.seen":"2024-05-13T22:36:00.306831866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-950600","uid":"902c7eb1-02e3-4b23-8223-42166b3328a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:36:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0513 22:36:59.506541    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/nodes/functional-950600
	I0513 22:36:59.506541    6152 round_trippers.go:469] Request Headers:
	I0513 22:36:59.506541    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:36:59.506541    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:36:59.511392    6152 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 22:36:59.511443    6152 round_trippers.go:577] Response Headers:
	I0513 22:36:59.511443    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:36:59.511443    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:36:59.511443    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:36:59 GMT
	I0513 22:36:59.511443    6152 round_trippers.go:580]     Audit-Id: d3b38d20-e44f-4ec1-8409-6926cf9568c3
	I0513 22:36:59.511443    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:36:59.511443    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:36:59.511443    6152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-950600","uid":"902c7eb1-02e3-4b23-8223-42166b3328a7","resourceVersion":"436","creationTimestamp":"2024-05-13T22:35:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-950600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-950600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_36_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:35:56Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0513 22:36:59.991474    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-950600
	I0513 22:36:59.991556    6152 round_trippers.go:469] Request Headers:
	I0513 22:36:59.991556    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:36:59.991556    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:36:59.997573    6152 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 22:36:59.997679    6152 round_trippers.go:577] Response Headers:
	I0513 22:36:59.997679    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:36:59.997679    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:36:59.997679    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:36:59.997679    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:36:59.997679    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:36:59 GMT
	I0513 22:36:59.997679    6152 round_trippers.go:580]     Audit-Id: 75a13ae3-c637-450d-9d76-0c8c2c358f8f
	I0513 22:36:59.998300    6152 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-950600","namespace":"kube-system","uid":"b8edaae9-85e1-437c-b198-7fb642809b58","resourceVersion":"483","creationTimestamp":"2024-05-13T22:36:00Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"cb9da9f07e863e25e41a37ea3f02d3c6","kubernetes.io/config.mirror":"cb9da9f07e863e25e41a37ea3f02d3c6","kubernetes.io/config.seen":"2024-05-13T22:36:00.306831866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-950600","uid":"902c7eb1-02e3-4b23-8223-42166b3328a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:36:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0513 22:36:59.999258    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/nodes/functional-950600
	I0513 22:36:59.999296    6152 round_trippers.go:469] Request Headers:
	I0513 22:36:59.999296    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:36:59.999296    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:37:00.010716    6152 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0513 22:37:00.010716    6152 round_trippers.go:577] Response Headers:
	I0513 22:37:00.010716    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:37:00 GMT
	I0513 22:37:00.010716    6152 round_trippers.go:580]     Audit-Id: ce03cb12-3c2d-4e7c-a927-9006cb64c9fb
	I0513 22:37:00.010716    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:37:00.010716    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:37:00.010716    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:37:00.010716    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:37:00.011558    6152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-950600","uid":"902c7eb1-02e3-4b23-8223-42166b3328a7","resourceVersion":"436","creationTimestamp":"2024-05-13T22:35:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-950600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-950600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_36_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:35:56Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0513 22:37:00.488152    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-950600
	I0513 22:37:00.488382    6152 round_trippers.go:469] Request Headers:
	I0513 22:37:00.488382    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:37:00.488382    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:37:00.494626    6152 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0513 22:37:00.494710    6152 round_trippers.go:577] Response Headers:
	I0513 22:37:00.494710    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:37:00.494798    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:37:00 GMT
	I0513 22:37:00.494964    6152 round_trippers.go:580]     Audit-Id: a5ea1cab-4fa3-4140-b217-7e5d062ecf2f
	I0513 22:37:00.495013    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:37:00.495088    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:37:00.495088    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:37:00.495811    6152 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-950600","namespace":"kube-system","uid":"b8edaae9-85e1-437c-b198-7fb642809b58","resourceVersion":"483","creationTimestamp":"2024-05-13T22:36:00Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"cb9da9f07e863e25e41a37ea3f02d3c6","kubernetes.io/config.mirror":"cb9da9f07e863e25e41a37ea3f02d3c6","kubernetes.io/config.seen":"2024-05-13T22:36:00.306831866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-950600","uid":"902c7eb1-02e3-4b23-8223-42166b3328a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:36:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0513 22:37:00.496028    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/nodes/functional-950600
	I0513 22:37:00.496028    6152 round_trippers.go:469] Request Headers:
	I0513 22:37:00.496028    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:37:00.496028    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:37:00.502184    6152 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0513 22:37:00.502184    6152 round_trippers.go:577] Response Headers:
	I0513 22:37:00.502184    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:37:00.502184    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:37:00.502184    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:37:00.502184    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:37:00.502184    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:37:00 GMT
	I0513 22:37:00.502184    6152 round_trippers.go:580]     Audit-Id: 8c803670-98ae-450e-84ef-ae2bf8018e20
	I0513 22:37:00.502715    6152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-950600","uid":"902c7eb1-02e3-4b23-8223-42166b3328a7","resourceVersion":"436","creationTimestamp":"2024-05-13T22:35:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-950600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-950600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_36_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:35:56Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0513 22:37:00.987426    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-950600
	I0513 22:37:00.987426    6152 round_trippers.go:469] Request Headers:
	I0513 22:37:00.987426    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:37:00.987426    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:37:00.993625    6152 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0513 22:37:00.993775    6152 round_trippers.go:577] Response Headers:
	I0513 22:37:00.993775    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:37:00 GMT
	I0513 22:37:00.993775    6152 round_trippers.go:580]     Audit-Id: f1d33549-26f3-4a63-af4c-8be7e176c83c
	I0513 22:37:00.993775    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:37:00.993775    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:37:00.993775    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:37:00.993775    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:37:00.993886    6152 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-950600","namespace":"kube-system","uid":"b8edaae9-85e1-437c-b198-7fb642809b58","resourceVersion":"483","creationTimestamp":"2024-05-13T22:36:00Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"cb9da9f07e863e25e41a37ea3f02d3c6","kubernetes.io/config.mirror":"cb9da9f07e863e25e41a37ea3f02d3c6","kubernetes.io/config.seen":"2024-05-13T22:36:00.306831866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-950600","uid":"902c7eb1-02e3-4b23-8223-42166b3328a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:36:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0513 22:37:00.994474    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/nodes/functional-950600
	I0513 22:37:00.994558    6152 round_trippers.go:469] Request Headers:
	I0513 22:37:00.994558    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:37:00.994558    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:37:01.000144    6152 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 22:37:01.000144    6152 round_trippers.go:577] Response Headers:
	I0513 22:37:01.000144    6152 round_trippers.go:580]     Audit-Id: 898a68e1-270b-4529-819d-92f604dc335e
	I0513 22:37:01.000144    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:37:01.000144    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:37:01.000144    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:37:01.000144    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:37:01.000144    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:37:01 GMT
	I0513 22:37:01.000144    6152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-950600","uid":"902c7eb1-02e3-4b23-8223-42166b3328a7","resourceVersion":"436","creationTimestamp":"2024-05-13T22:35:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-950600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-950600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_36_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:35:56Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0513 22:37:01.000824    6152 pod_ready.go:102] pod "kube-scheduler-functional-950600" in "kube-system" namespace has status "Ready":"False"
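The `pod_ready.go` line above summarizes what the surrounding polling traces are doing: minikube re-fetches the kube-scheduler pod and its node roughly every 500 ms and reports not-ready until the pod's `Ready` condition flips to `"True"`. A minimal sketch of that condition check, using simplified stand-in types rather than the real `k8s.io/api` structs (field names here are illustrative assumptions, not minikube's actual implementation):

```go
package main

import "fmt"

// Simplified stand-ins for the Kubernetes pod status types that the
// real poller reads from the API server responses shown in this log.
type PodCondition struct {
	Type   string
	Status string
}

type PodStatus struct {
	Conditions []PodCondition
}

// isPodReady mirrors the check behind the log line
// `pod "..." in "kube-system" namespace has status "Ready":"False"`:
// a pod counts as ready only when its Ready condition is "True".
func isPodReady(s PodStatus) bool {
	for _, c := range s.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True"
		}
	}
	// No Ready condition reported yet: treat as not ready.
	return false
}

func main() {
	notReady := PodStatus{Conditions: []PodCondition{{Type: "Ready", Status: "False"}}}
	ready := PodStatus{Conditions: []PodCondition{{Type: "Ready", Status: "True"}}}
	fmt.Println(isPodReady(notReady))
	fmt.Println(isPodReady(ready))
}
```

In the log, every cycle through `round_trippers.go` (GET pod, GET node) feeds one such check; the loop exits once the condition reads `"True"` or the overall wait deadline expires.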
	I0513 22:37:01.486986    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-950600
	I0513 22:37:01.486986    6152 round_trippers.go:469] Request Headers:
	I0513 22:37:01.486986    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:37:01.486986    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:37:01.491832    6152 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 22:37:01.491832    6152 round_trippers.go:577] Response Headers:
	I0513 22:37:01.491832    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:37:01.491832    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:37:01 GMT
	I0513 22:37:01.491832    6152 round_trippers.go:580]     Audit-Id: 075f74b9-c5fe-477d-8b0a-5760fa1418d4
	I0513 22:37:01.491832    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:37:01.491832    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:37:01.491832    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:37:01.492731    6152 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-950600","namespace":"kube-system","uid":"b8edaae9-85e1-437c-b198-7fb642809b58","resourceVersion":"483","creationTimestamp":"2024-05-13T22:36:00Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"cb9da9f07e863e25e41a37ea3f02d3c6","kubernetes.io/config.mirror":"cb9da9f07e863e25e41a37ea3f02d3c6","kubernetes.io/config.seen":"2024-05-13T22:36:00.306831866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-950600","uid":"902c7eb1-02e3-4b23-8223-42166b3328a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:36:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0513 22:37:01.493686    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/nodes/functional-950600
	I0513 22:37:01.493728    6152 round_trippers.go:469] Request Headers:
	I0513 22:37:01.493728    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:37:01.493728    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:37:01.500667    6152 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0513 22:37:01.501107    6152 round_trippers.go:577] Response Headers:
	I0513 22:37:01.501107    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:37:01.501107    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:37:01.501107    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:37:01.501186    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:37:01.501243    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:37:01 GMT
	I0513 22:37:01.501243    6152 round_trippers.go:580]     Audit-Id: a4e73174-0a81-441b-b3a8-83b7b2ae9847
	I0513 22:37:01.501428    6152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-950600","uid":"902c7eb1-02e3-4b23-8223-42166b3328a7","resourceVersion":"436","creationTimestamp":"2024-05-13T22:35:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-950600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-950600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_36_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:35:56Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0513 22:37:01.999835    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-950600
	I0513 22:37:02.000055    6152 round_trippers.go:469] Request Headers:
	I0513 22:37:02.000055    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:37:02.000055    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:37:02.004228    6152 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 22:37:02.004228    6152 round_trippers.go:577] Response Headers:
	I0513 22:37:02.004228    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:37:02.004228    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:37:02.004228    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:37:02.004228    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:37:02 GMT
	I0513 22:37:02.004228    6152 round_trippers.go:580]     Audit-Id: 288f6ea4-06ff-486b-9c1d-67d31d73c589
	I0513 22:37:02.004228    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:37:02.004797    6152 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-950600","namespace":"kube-system","uid":"b8edaae9-85e1-437c-b198-7fb642809b58","resourceVersion":"483","creationTimestamp":"2024-05-13T22:36:00Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"cb9da9f07e863e25e41a37ea3f02d3c6","kubernetes.io/config.mirror":"cb9da9f07e863e25e41a37ea3f02d3c6","kubernetes.io/config.seen":"2024-05-13T22:36:00.306831866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-950600","uid":"902c7eb1-02e3-4b23-8223-42166b3328a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:36:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0513 22:37:02.005168    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/nodes/functional-950600
	I0513 22:37:02.005168    6152 round_trippers.go:469] Request Headers:
	I0513 22:37:02.005168    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:37:02.005168    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:37:02.010287    6152 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 22:37:02.010287    6152 round_trippers.go:577] Response Headers:
	I0513 22:37:02.010287    6152 round_trippers.go:580]     Audit-Id: 75d8a057-9a46-46ca-ba01-2041c7961c74
	I0513 22:37:02.010287    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:37:02.010287    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:37:02.010287    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:37:02.010287    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:37:02.010287    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:37:02 GMT
	I0513 22:37:02.010822    6152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-950600","uid":"902c7eb1-02e3-4b23-8223-42166b3328a7","resourceVersion":"436","creationTimestamp":"2024-05-13T22:35:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-950600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-950600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_36_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:35:56Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0513 22:37:02.496742    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-950600
	I0513 22:37:02.496742    6152 round_trippers.go:469] Request Headers:
	I0513 22:37:02.496742    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:37:02.496742    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:37:02.502977    6152 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0513 22:37:02.503038    6152 round_trippers.go:577] Response Headers:
	I0513 22:37:02.503038    6152 round_trippers.go:580]     Audit-Id: 418eaa58-23bc-4102-921d-4ae6c6b1409c
	I0513 22:37:02.503038    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:37:02.503073    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:37:02.503073    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:37:02.503073    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:37:02.503073    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:37:02 GMT
	I0513 22:37:02.503194    6152 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-950600","namespace":"kube-system","uid":"b8edaae9-85e1-437c-b198-7fb642809b58","resourceVersion":"527","creationTimestamp":"2024-05-13T22:36:00Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"cb9da9f07e863e25e41a37ea3f02d3c6","kubernetes.io/config.mirror":"cb9da9f07e863e25e41a37ea3f02d3c6","kubernetes.io/config.seen":"2024-05-13T22:36:00.306831866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-950600","uid":"902c7eb1-02e3-4b23-8223-42166b3328a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:36:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5440 chars]
	I0513 22:37:02.503828    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/nodes/functional-950600
	I0513 22:37:02.503925    6152 round_trippers.go:469] Request Headers:
	I0513 22:37:02.503925    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:37:02.504079    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:37:02.509812    6152 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 22:37:02.509812    6152 round_trippers.go:577] Response Headers:
	I0513 22:37:02.509812    6152 round_trippers.go:580]     Audit-Id: 9c744603-8868-4fc1-8eed-604a227efdf6
	I0513 22:37:02.509812    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:37:02.509812    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:37:02.509812    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:37:02.509812    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:37:02.509812    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:37:02 GMT
	I0513 22:37:02.509812    6152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-950600","uid":"902c7eb1-02e3-4b23-8223-42166b3328a7","resourceVersion":"436","creationTimestamp":"2024-05-13T22:35:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-950600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-950600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_36_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:35:56Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0513 22:37:02.994989    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-950600
	I0513 22:37:02.994989    6152 round_trippers.go:469] Request Headers:
	I0513 22:37:02.995101    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:37:02.995101    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:37:03.000463    6152 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 22:37:03.000463    6152 round_trippers.go:577] Response Headers:
	I0513 22:37:03.000463    6152 round_trippers.go:580]     Audit-Id: 92bfcd95-d547-445b-bacd-787934ba2837
	I0513 22:37:03.000463    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:37:03.000463    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:37:03.000463    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:37:03.000463    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:37:03.000463    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:37:03 GMT
	I0513 22:37:03.001296    6152 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-950600","namespace":"kube-system","uid":"b8edaae9-85e1-437c-b198-7fb642809b58","resourceVersion":"528","creationTimestamp":"2024-05-13T22:36:00Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"cb9da9f07e863e25e41a37ea3f02d3c6","kubernetes.io/config.mirror":"cb9da9f07e863e25e41a37ea3f02d3c6","kubernetes.io/config.seen":"2024-05-13T22:36:00.306831866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-950600","uid":"902c7eb1-02e3-4b23-8223-42166b3328a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:36:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5197 chars]
	I0513 22:37:03.001353    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/nodes/functional-950600
	I0513 22:37:03.001353    6152 round_trippers.go:469] Request Headers:
	I0513 22:37:03.001353    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:37:03.001353    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:37:03.007818    6152 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0513 22:37:03.007818    6152 round_trippers.go:577] Response Headers:
	I0513 22:37:03.007818    6152 round_trippers.go:580]     Audit-Id: 7aaea4b1-ef01-43ad-b62f-26010f7d0319
	I0513 22:37:03.007818    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:37:03.007818    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:37:03.007818    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:37:03.007818    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:37:03.007818    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:37:03 GMT
	I0513 22:37:03.007818    6152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-950600","uid":"902c7eb1-02e3-4b23-8223-42166b3328a7","resourceVersion":"436","creationTimestamp":"2024-05-13T22:35:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-950600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-950600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_36_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:35:56Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0513 22:37:03.008683    6152 pod_ready.go:92] pod "kube-scheduler-functional-950600" in "kube-system" namespace has status "Ready":"True"
	I0513 22:37:03.008740    6152 pod_ready.go:81] duration metric: took 11.0227094s for pod "kube-scheduler-functional-950600" in "kube-system" namespace to be "Ready" ...
	I0513 22:37:03.008803    6152 pod_ready.go:38] duration metric: took 12.2184532s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0513 22:37:03.008866    6152 api_server.go:52] waiting for apiserver process to appear ...
	I0513 22:37:03.024258    6152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0513 22:37:03.046030    6152 command_runner.go:130] > 5474
	I0513 22:37:03.046030    6152 api_server.go:72] duration metric: took 13.6337811s to wait for apiserver process to appear ...
	I0513 22:37:03.046030    6152 api_server.go:88] waiting for apiserver healthz status ...
	I0513 22:37:03.046870    6152 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52768/healthz ...
	I0513 22:37:03.058446    6152 api_server.go:279] https://127.0.0.1:52768/healthz returned 200:
	ok
	I0513 22:37:03.058553    6152 round_trippers.go:463] GET https://127.0.0.1:52768/version
	I0513 22:37:03.058553    6152 round_trippers.go:469] Request Headers:
	I0513 22:37:03.058553    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:37:03.058553    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:37:03.061819    6152 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 22:37:03.061819    6152 round_trippers.go:577] Response Headers:
	I0513 22:37:03.061819    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:37:03.061819    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:37:03.061819    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:37:03.061819    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:37:03.061819    6152 round_trippers.go:580]     Content-Length: 263
	I0513 22:37:03.061819    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:37:03 GMT
	I0513 22:37:03.061819    6152 round_trippers.go:580]     Audit-Id: 4e0cf8e9-a96a-43a3-a92d-b93b057f9405
	I0513 22:37:03.061819    6152 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0513 22:37:03.061819    6152 api_server.go:141] control plane version: v1.30.0
	I0513 22:37:03.061819    6152 api_server.go:131] duration metric: took 15.7875ms to wait for apiserver health ...
	I0513 22:37:03.061819    6152 system_pods.go:43] waiting for kube-system pods to appear ...
	I0513 22:37:03.061819    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/namespaces/kube-system/pods
	I0513 22:37:03.061819    6152 round_trippers.go:469] Request Headers:
	I0513 22:37:03.061819    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:37:03.061819    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:37:03.069265    6152 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0513 22:37:03.069265    6152 round_trippers.go:577] Response Headers:
	I0513 22:37:03.069331    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:37:03.069331    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:37:03 GMT
	I0513 22:37:03.069331    6152 round_trippers.go:580]     Audit-Id: 62ab527f-6191-472c-ade4-f976934543f0
	I0513 22:37:03.069331    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:37:03.069368    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:37:03.069368    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:37:03.070227    6152 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"528"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-tctr7","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"803d730e-40db-44ae-a985-172df4c93426","resourceVersion":"518","creationTimestamp":"2024-05-13T22:36:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3d5bbe60-fc7f-46f3-8b9d-fd19541a5f4c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:36:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3d5bbe60-fc7f-46f3-8b9d-fd19541a5f4c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 51751 chars]
	I0513 22:37:03.072514    6152 system_pods.go:59] 7 kube-system pods found
	I0513 22:37:03.072652    6152 system_pods.go:61] "coredns-7db6d8ff4d-tctr7" [803d730e-40db-44ae-a985-172df4c93426] Running
	I0513 22:37:03.072652    6152 system_pods.go:61] "etcd-functional-950600" [4d364406-6af0-4436-a513-92bdd7b90e0c] Running
	I0513 22:37:03.072652    6152 system_pods.go:61] "kube-apiserver-functional-950600" [147df9fd-5a20-4fc8-ace9-c5d61bbca55a] Running
	I0513 22:37:03.072652    6152 system_pods.go:61] "kube-controller-manager-functional-950600" [b07ed4ed-d6e7-4cb3-b2b7-d10b9ae3a582] Running
	I0513 22:37:03.072652    6152 system_pods.go:61] "kube-proxy-bqx72" [a644984a-80a5-459f-8986-72748e5f8487] Running
	I0513 22:37:03.072652    6152 system_pods.go:61] "kube-scheduler-functional-950600" [b8edaae9-85e1-437c-b198-7fb642809b58] Running
	I0513 22:37:03.072652    6152 system_pods.go:61] "storage-provisioner" [197b8324-4017-4bce-9f03-e06f5620dfe4] Running
	I0513 22:37:03.072652    6152 system_pods.go:74] duration metric: took 10.8334ms to wait for pod list to return data ...
	I0513 22:37:03.072652    6152 default_sa.go:34] waiting for default service account to be created ...
	I0513 22:37:03.072833    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/namespaces/default/serviceaccounts
	I0513 22:37:03.072833    6152 round_trippers.go:469] Request Headers:
	I0513 22:37:03.072833    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:37:03.072833    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:37:03.076688    6152 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 22:37:03.076688    6152 round_trippers.go:577] Response Headers:
	I0513 22:37:03.076688    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:37:03.076688    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:37:03.076688    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:37:03.076688    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:37:03.076688    6152 round_trippers.go:580]     Content-Length: 261
	I0513 22:37:03.076688    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:37:03 GMT
	I0513 22:37:03.076688    6152 round_trippers.go:580]     Audit-Id: 9e0c1792-810a-4c75-8b59-992ac7342615
	I0513 22:37:03.076688    6152 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"528"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"bedbd7f3-be83-44d2-bf9c-86502559518d","resourceVersion":"372","creationTimestamp":"2024-05-13T22:36:14Z"}}]}
	I0513 22:37:03.076688    6152 default_sa.go:45] found service account: "default"
	I0513 22:37:03.077218    6152 default_sa.go:55] duration metric: took 4.5649ms for default service account to be created ...
	I0513 22:37:03.077218    6152 system_pods.go:116] waiting for k8s-apps to be running ...
	I0513 22:37:03.077388    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/namespaces/kube-system/pods
	I0513 22:37:03.077388    6152 round_trippers.go:469] Request Headers:
	I0513 22:37:03.077388    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:37:03.077388    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:37:03.085218    6152 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0513 22:37:03.085218    6152 round_trippers.go:577] Response Headers:
	I0513 22:37:03.085218    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:37:03 GMT
	I0513 22:37:03.085218    6152 round_trippers.go:580]     Audit-Id: 9561d52b-03bf-485b-989c-3118d79e3ebc
	I0513 22:37:03.085218    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:37:03.085218    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:37:03.085218    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:37:03.085218    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:37:03.087292    6152 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"528"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-tctr7","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"803d730e-40db-44ae-a985-172df4c93426","resourceVersion":"518","creationTimestamp":"2024-05-13T22:36:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"3d5bbe60-fc7f-46f3-8b9d-fd19541a5f4c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:36:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3d5bbe60-fc7f-46f3-8b9d-fd19541a5f4c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 51751 chars]
	I0513 22:37:03.090006    6152 system_pods.go:86] 7 kube-system pods found
	I0513 22:37:03.090006    6152 system_pods.go:89] "coredns-7db6d8ff4d-tctr7" [803d730e-40db-44ae-a985-172df4c93426] Running
	I0513 22:37:03.090006    6152 system_pods.go:89] "etcd-functional-950600" [4d364406-6af0-4436-a513-92bdd7b90e0c] Running
	I0513 22:37:03.090006    6152 system_pods.go:89] "kube-apiserver-functional-950600" [147df9fd-5a20-4fc8-ace9-c5d61bbca55a] Running
	I0513 22:37:03.090006    6152 system_pods.go:89] "kube-controller-manager-functional-950600" [b07ed4ed-d6e7-4cb3-b2b7-d10b9ae3a582] Running
	I0513 22:37:03.090006    6152 system_pods.go:89] "kube-proxy-bqx72" [a644984a-80a5-459f-8986-72748e5f8487] Running
	I0513 22:37:03.090006    6152 system_pods.go:89] "kube-scheduler-functional-950600" [b8edaae9-85e1-437c-b198-7fb642809b58] Running
	I0513 22:37:03.090006    6152 system_pods.go:89] "storage-provisioner" [197b8324-4017-4bce-9f03-e06f5620dfe4] Running
	I0513 22:37:03.090006    6152 system_pods.go:126] duration metric: took 12.69ms to wait for k8s-apps to be running ...
	I0513 22:37:03.090006    6152 system_svc.go:44] waiting for kubelet service to be running ....
	I0513 22:37:03.102321    6152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0513 22:37:03.125097    6152 system_svc.go:56] duration metric: took 35.0894ms WaitForService to wait for kubelet
	I0513 22:37:03.125097    6152 kubeadm.go:576] duration metric: took 13.7128439s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0513 22:37:03.125638    6152 node_conditions.go:102] verifying NodePressure condition ...
	I0513 22:37:03.125759    6152 round_trippers.go:463] GET https://127.0.0.1:52768/api/v1/nodes
	I0513 22:37:03.125759    6152 round_trippers.go:469] Request Headers:
	I0513 22:37:03.125807    6152 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:37:03.125807    6152 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:37:03.132061    6152 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0513 22:37:03.132061    6152 round_trippers.go:577] Response Headers:
	I0513 22:37:03.132061    6152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0394dc42-a5e9-4ced-8354-a85aa6209bba
	I0513 22:37:03.132061    6152 round_trippers.go:580]     Date: Mon, 13 May 2024 22:37:03 GMT
	I0513 22:37:03.132061    6152 round_trippers.go:580]     Audit-Id: 9c5ea537-f12b-4e7a-bfaf-786fe25eff09
	I0513 22:37:03.132061    6152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:37:03.132061    6152 round_trippers.go:580]     Content-Type: application/json
	I0513 22:37:03.132061    6152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 12e81868-a6f6-4e5d-99de-de1192169fca
	I0513 22:37:03.132940    6152 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"528"},"items":[{"metadata":{"name":"functional-950600","uid":"902c7eb1-02e3-4b23-8223-42166b3328a7","resourceVersion":"436","creationTimestamp":"2024-05-13T22:35:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-950600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-950600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_36_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedF
ields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4908 chars]
	I0513 22:37:03.133349    6152 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I0513 22:37:03.133434    6152 node_conditions.go:123] node cpu capacity is 16
	I0513 22:37:03.133518    6152 node_conditions.go:105] duration metric: took 7.8797ms to run NodePressure ...
	I0513 22:37:03.133606    6152 start.go:240] waiting for startup goroutines ...
	I0513 22:37:03.133634    6152 start.go:245] waiting for cluster config update ...
	I0513 22:37:03.133689    6152 start.go:254] writing updated cluster config ...
	I0513 22:37:03.148353    6152 ssh_runner.go:195] Run: rm -f paused
	I0513 22:37:03.278015    6152 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0513 22:37:03.281266    6152 out.go:177] * Done! kubectl is now configured to use "functional-950600" cluster and "default" namespace by default
	
	
	==> Docker <==
	May 13 22:36:41 functional-950600 cri-dockerd[4890]: time="2024-05-13T22:36:41Z" level=info msg="Start cri-dockerd grpc backend"
	May 13 22:36:41 functional-950600 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	May 13 22:36:41 functional-950600 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	May 13 22:36:41 functional-950600 systemd[1]: cri-docker.service: Deactivated successfully.
	May 13 22:36:41 functional-950600 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	May 13 22:36:41 functional-950600 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	May 13 22:36:41 functional-950600 cri-dockerd[4995]: time="2024-05-13T22:36:41Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	May 13 22:36:41 functional-950600 cri-dockerd[4995]: time="2024-05-13T22:36:41Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	May 13 22:36:41 functional-950600 cri-dockerd[4995]: time="2024-05-13T22:36:41Z" level=info msg="Start docker client with request timeout 0s"
	May 13 22:36:41 functional-950600 cri-dockerd[4995]: time="2024-05-13T22:36:41Z" level=info msg="Hairpin mode is set to hairpin-veth"
	May 13 22:36:41 functional-950600 cri-dockerd[4995]: time="2024-05-13T22:36:41Z" level=info msg="Loaded network plugin cni"
	May 13 22:36:41 functional-950600 cri-dockerd[4995]: time="2024-05-13T22:36:41Z" level=info msg="Docker cri networking managed by network plugin cni"
	May 13 22:36:41 functional-950600 cri-dockerd[4995]: time="2024-05-13T22:36:41Z" level=info msg="Setting cgroupDriver cgroupfs"
	May 13 22:36:41 functional-950600 cri-dockerd[4995]: time="2024-05-13T22:36:41Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	May 13 22:36:41 functional-950600 cri-dockerd[4995]: time="2024-05-13T22:36:41Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	May 13 22:36:41 functional-950600 cri-dockerd[4995]: time="2024-05-13T22:36:41Z" level=info msg="Start cri-dockerd grpc backend"
	May 13 22:36:41 functional-950600 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	May 13 22:36:42 functional-950600 cri-dockerd[4995]: time="2024-05-13T22:36:42Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-tctr7_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"2c8f77e2e5435d489e234af9dab2e531605f27b68c367125ca338a14168c3a59\""
	May 13 22:36:42 functional-950600 cri-dockerd[4995]: time="2024-05-13T22:36:42Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e44fc8d3cc79f7cc9f42de68a2318ccbc44122e2e69a89f02e3630ca0532d23a/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	May 13 22:36:42 functional-950600 cri-dockerd[4995]: time="2024-05-13T22:36:42Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ac942390c99077e364a985ce33a762cf37c3b1e92611ce3d1b836f36509b9210/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	May 13 22:36:43 functional-950600 cri-dockerd[4995]: time="2024-05-13T22:36:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f13b30805ec886c886a33f67c5f4f376c8bbeae0aa48bc4668e1e8324bd3d64d/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	May 13 22:36:43 functional-950600 cri-dockerd[4995]: time="2024-05-13T22:36:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9b0cb8ff194698cc049dda5e623ded078e4dbe16d98b6517267cf7bfbd40fa1d/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	May 13 22:36:43 functional-950600 cri-dockerd[4995]: time="2024-05-13T22:36:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b9c250e604bbeaf5e327af9b7b025f3940254290503f24a4047110b13e73864d/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	May 13 22:36:44 functional-950600 cri-dockerd[4995]: time="2024-05-13T22:36:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/89b09fdd16ccc73d20d3cf82d0b90b074b1c156b8ff267b17ca809b88d984431/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	May 13 22:36:46 functional-950600 cri-dockerd[4995]: time="2024-05-13T22:36:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/89d778a133d2d7a87be8d3bd080fcb40c4b82e11d3a8a9499cecadd8982d5cce/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	ae3bc77db96bf       c7aad43836fa5       39 seconds ago       Running             kube-controller-manager   1                   89d778a133d2d       kube-controller-manager-functional-950600
	926fe4cb3b3ee       cbb01a7bd410d       41 seconds ago       Running             coredns                   1                   89b09fdd16ccc       coredns-7db6d8ff4d-tctr7
	91c97ddd7812b       6e38f40d628db       42 seconds ago       Running             storage-provisioner       1                   b9c250e604bbe       storage-provisioner
	76ca5379b8c8e       259c8277fcbbc       42 seconds ago       Running             kube-scheduler            1                   f13b30805ec88       kube-scheduler-functional-950600
	c4096ef9afcaa       a0bf559e280cf       42 seconds ago       Running             kube-proxy                1                   9b0cb8ff19469       kube-proxy-bqx72
	0d1377b26da34       3861cfcd7c04c       42 seconds ago       Running             etcd                      1                   ac942390c9907       etcd-functional-950600
	33c0d5909d464       c42f13656d0b2       43 seconds ago       Running             kube-apiserver            1                   e44fc8d3cc79f       kube-apiserver-functional-950600
	506e9dfb68354       6e38f40d628db       About a minute ago   Exited              storage-provisioner       0                   8e423681f7293       storage-provisioner
	c8489f37ec5ab       cbb01a7bd410d       About a minute ago   Exited              coredns                   0                   2c8f77e2e5435       coredns-7db6d8ff4d-tctr7
	59902d194998e       a0bf559e280cf       About a minute ago   Exited              kube-proxy                0                   885ebf049b6b9       kube-proxy-bqx72
	e522f513430b1       259c8277fcbbc       About a minute ago   Exited              kube-scheduler            0                   f4b6ee88f5b4d       kube-scheduler-functional-950600
	e4fd8c86bfda4       3861cfcd7c04c       About a minute ago   Exited              etcd                      0                   0f34dd5743718       etcd-functional-950600
	7fd3055e4bf5f       c7aad43836fa5       About a minute ago   Exited              kube-controller-manager   0                   ef32205d2faad       kube-controller-manager-functional-950600
	7a9a4df7fa867       c42f13656d0b2       About a minute ago   Exited              kube-apiserver            0                   627820d671285       kube-apiserver-functional-950600
	
	
	==> coredns [926fe4cb3b3e] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = f869070685748660180df1b7a47d58cdafcf2f368266578c062d1151dc2c900964aecc5975e8882e6de6fdfb6460463e30ebfaad2ec8f0c3c6436f80225b3b5b
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:40670 - 10563 "HINFO IN 4187725280841473488.6554862083349300460. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.099542139s
	
	
	==> coredns [c8489f37ec5a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-950600
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-950600
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bf4e5d623f67cc0fbec852b09e6284e0ebf63761
	                    minikube.k8s.io/name=functional-950600
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_13T22_36_00_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 May 2024 22:35:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-950600
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 May 2024 22:37:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 May 2024 22:36:21 +0000   Mon, 13 May 2024 22:35:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 May 2024 22:36:21 +0000   Mon, 13 May 2024 22:35:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 May 2024 22:36:21 +0000   Mon, 13 May 2024 22:35:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 May 2024 22:36:21 +0000   Mon, 13 May 2024 22:36:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-950600
	Capacity:
	  cpu:                16
	  ephemeral-storage:  1055762868Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32868764Ki
	  pods:               110
	Allocatable:
	  cpu:                16
	  ephemeral-storage:  1055762868Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32868764Ki
	  pods:               110
	System Info:
	  Machine ID:                 3f0de3bbedd14550a903c45f8c124fc8
	  System UUID:                3f0de3bbedd14550a903c45f8c124fc8
	  Boot ID:                    e642bd6d-2f44-4251-bc65-a922b73ecc4a
	  Kernel Version:             5.15.146.1-microsoft-standard-WSL2
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-tctr7                     100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     71s
	  kube-system                 etcd-functional-950600                       100m (0%)     0 (0%)      100Mi (0%)       0 (0%)         85s
	  kube-system                 kube-apiserver-functional-950600             250m (1%)     0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-controller-manager-functional-950600    200m (1%)     0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-proxy-bqx72                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-scheduler-functional-950600             100m (0%)     0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         68s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (4%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 68s                kube-proxy       
	  Normal  Starting                 34s                kube-proxy       
	  Normal  Starting                 94s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  94s (x8 over 94s)  kubelet          Node functional-950600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    94s (x8 over 94s)  kubelet          Node functional-950600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     94s (x7 over 94s)  kubelet          Node functional-950600 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  94s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  85s                kubelet          Node functional-950600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    85s                kubelet          Node functional-950600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     85s                kubelet          Node functional-950600 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             85s                kubelet          Node functional-950600 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  85s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 85s                kubelet          Starting kubelet.
	  Normal  NodeReady                75s                kubelet          Node functional-950600 status is now: NodeReady
	  Normal  RegisteredNode           72s                node-controller  Node functional-950600 event: Registered Node functional-950600 in Controller
	  Normal  RegisteredNode           20s                node-controller  Node functional-950600 event: Registered Node functional-950600 in Controller
	
	
	==> dmesg <==
	[  +0.000006]  failed 2
	[  +0.012111] FS-Cache: Duplicate cookie detected
	[  +0.000990] FS-Cache: O-cookie c=0000000d [p=00000002 fl=222 nc=0 na=1]
	[  +0.001142] FS-Cache: O-cookie d=00000000bcc88af5{9P.session} n=000000008020aead
	[  +0.001578] FS-Cache: O-key=[10] '34323934393337353834'
	[  +0.000877] FS-Cache: N-cookie c=0000000e [p=00000002 fl=2 nc=0 na=1]
	[  +0.001169] FS-Cache: N-cookie d=00000000bcc88af5{9P.session} n=000000006d7cf6b0
	[  +0.001638] FS-Cache: N-key=[10] '34323934393337353834'
	[  +0.004887] WSL (1) WARNING: /usr/share/zoneinfo/Etc/UTC not found. Is the tzdata package installed?
	[  +0.451939] misc dxg: dxgk: dxgglobal_acquire_channel_lock: Failed to acquire global channel lock
	[May13 20:57] WSL (2) ERROR: UtilCreateProcessAndWait:665: /bin/mount failed with 2
	[  +0.002348] WSL (1) ERROR: UtilCreateProcessAndWait:687: /bin/mount failed with status 0xff00
	
	[  +0.002636] WSL (1) ERROR: ConfigMountFsTab:2589: Processing fstab with mount -a failed.
	[  +0.003815] WSL (1) ERROR: ConfigApplyWindowsLibPath:2537: open /etc/ld.so.conf.d/ld.wsl.conf
	[  +0.000003]  failed 2
	[  +0.007841] WSL (3) ERROR: UtilCreateProcessAndWait:665: /bin/mount failed with 2
	[  +0.001683] WSL (1) ERROR: UtilCreateProcessAndWait:687: /bin/mount failed with status 0xff00
	
	[  +0.003844] WSL (4) ERROR: UtilCreateProcessAndWait:665: /bin/mount failed with 2
	[  +0.001821] WSL (1) ERROR: UtilCreateProcessAndWait:687: /bin/mount failed with status 0xff00
	
	[  +0.064851] WSL (1) WARNING: /usr/share/zoneinfo/Etc/UTC not found. Is the tzdata package installed?
	[  +0.119279] misc dxg: dxgk: dxgglobal_acquire_channel_lock: Failed to acquire global channel lock
	[  +0.792289] netlink: 'init': attribute type 4 has an invalid length.
	
	
	==> etcd [0d1377b26da3] <==
	{"level":"info","ts":"2024-05-13T22:36:45.705427Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-13T22:36:45.777959Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-13T22:36:45.778458Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-13T22:36:45.778658Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-13T22:36:45.780844Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-13T22:36:45.786333Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-05-13T22:36:51.096216Z","caller":"traceutil/trace.go:171","msg":"trace[1165141931] transaction","detail":"{read_only:false; response_revision:449; number_of_response:1; }","duration":"111.932086ms","start":"2024-05-13T22:36:50.984267Z","end":"2024-05-13T22:36:51.096199Z","steps":["trace[1165141931] 'process raft request'  (duration: 111.680449ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-13T22:36:51.195931Z","caller":"traceutil/trace.go:171","msg":"trace[1627832689] linearizableReadLoop","detail":"{readStateIndex:467; appliedIndex:465; }","duration":"210.365061ms","start":"2024-05-13T22:36:50.985543Z","end":"2024-05-13T22:36:51.195908Z","steps":["trace[1627832689] 'read index received'  (duration: 110.427762ms)","trace[1627832689] 'applied index is now lower than readState.Index'  (duration: 99.936298ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-13T22:36:51.196217Z","caller":"traceutil/trace.go:171","msg":"trace[1095188563] transaction","detail":"{read_only:false; response_revision:450; number_of_response:1; }","duration":"210.02801ms","start":"2024-05-13T22:36:50.986173Z","end":"2024-05-13T22:36:51.196201Z","steps":["trace[1095188563] 'process raft request'  (duration: 209.609948ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-13T22:36:51.196412Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"210.338957ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/storage-provisioner\" ","response":"range_response_count:1 size:4085"}
	{"level":"info","ts":"2024-05-13T22:36:51.196491Z","caller":"traceutil/trace.go:171","msg":"trace[1636309786] range","detail":"{range_begin:/registry/pods/kube-system/storage-provisioner; range_end:; response_count:1; response_revision:450; }","duration":"210.485478ms","start":"2024-05-13T22:36:50.98599Z","end":"2024-05-13T22:36:51.196475Z","steps":["trace[1636309786] 'agreement among raft nodes before linearized reading'  (duration: 210.249843ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-13T22:36:51.196599Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.167717ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-node-lease/functional-950600\" ","response":"range_response_count:1 size:571"}
	{"level":"info","ts":"2024-05-13T22:36:51.196649Z","caller":"traceutil/trace.go:171","msg":"trace[1284272532] range","detail":"{range_begin:/registry/leases/kube-node-lease/functional-950600; range_end:; response_count:1; response_revision:450; }","duration":"116.239229ms","start":"2024-05-13T22:36:51.080396Z","end":"2024-05-13T22:36:51.196635Z","steps":["trace[1284272532] 'agreement among raft nodes before linearized reading'  (duration: 116.155616ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-13T22:36:51.196657Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"210.549788ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-bqx72\" ","response":"range_response_count:1 size:4648"}
	{"level":"info","ts":"2024-05-13T22:36:51.196691Z","caller":"traceutil/trace.go:171","msg":"trace[272541066] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-bqx72; range_end:; response_count:1; response_revision:450; }","duration":"210.596795ms","start":"2024-05-13T22:36:50.986084Z","end":"2024-05-13T22:36:51.196681Z","steps":["trace[272541066] 'agreement among raft nodes before linearized reading'  (duration: 210.543187ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-13T22:36:51.196607Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.902776ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-05-13T22:36:51.196801Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.240529ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-tctr7\" ","response":"range_response_count:1 size:4790"}
	{"level":"info","ts":"2024-05-13T22:36:51.19685Z","caller":"traceutil/trace.go:171","msg":"trace[2020969933] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7db6d8ff4d-tctr7; range_end:; response_count:1; response_revision:450; }","duration":"116.300337ms","start":"2024-05-13T22:36:51.080532Z","end":"2024-05-13T22:36:51.196833Z","steps":["trace[2020969933] 'agreement among raft nodes before linearized reading'  (duration: 116.228327ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-13T22:36:51.196808Z","caller":"traceutil/trace.go:171","msg":"trace[995204925] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:450; }","duration":"118.129011ms","start":"2024-05-13T22:36:51.078665Z","end":"2024-05-13T22:36:51.196794Z","steps":["trace[995204925] 'agreement among raft nodes before linearized reading'  (duration: 117.914579ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-13T22:36:51.196962Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"211.411416ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/extension-apiserver-authentication\" ","response":"range_response_count:1 size:3021"}
	{"level":"info","ts":"2024-05-13T22:36:51.196997Z","caller":"traceutil/trace.go:171","msg":"trace[182333949] range","detail":"{range_begin:/registry/configmaps/kube-system/extension-apiserver-authentication; range_end:; response_count:1; response_revision:450; }","duration":"211.462924ms","start":"2024-05-13T22:36:50.985523Z","end":"2024-05-13T22:36:51.196986Z","steps":["trace[182333949] 'agreement among raft nodes before linearized reading'  (duration: 211.401115ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-13T22:36:51.595615Z","caller":"traceutil/trace.go:171","msg":"trace[701588623] linearizableReadLoop","detail":"{readStateIndex:474; appliedIndex:473; }","duration":"101.764171ms","start":"2024-05-13T22:36:51.493833Z","end":"2024-05-13T22:36:51.595598Z","steps":["trace[701588623] 'read index received'  (duration: 85.355025ms)","trace[701588623] 'applied index is now lower than readState.Index'  (duration: 16.408246ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-13T22:36:51.595818Z","caller":"traceutil/trace.go:171","msg":"trace[2027456199] transaction","detail":"{read_only:false; response_revision:456; number_of_response:1; }","duration":"107.823274ms","start":"2024-05-13T22:36:51.48794Z","end":"2024-05-13T22:36:51.595763Z","steps":["trace[2027456199] 'process raft request'  (duration: 91.29951ms)","trace[2027456199] 'compare'  (duration: 16.241522ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-13T22:36:51.595876Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.018808ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-functional-950600\" ","response":"range_response_count:1 size:7475"}
	{"level":"info","ts":"2024-05-13T22:36:51.595918Z","caller":"traceutil/trace.go:171","msg":"trace[431282197] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-functional-950600; range_end:; response_count:1; response_revision:456; }","duration":"102.103421ms","start":"2024-05-13T22:36:51.493804Z","end":"2024-05-13T22:36:51.595907Z","steps":["trace[431282197] 'agreement among raft nodes before linearized reading'  (duration: 101.969501ms)"],"step_count":1}
	
	
	==> etcd [e4fd8c86bfda] <==
	{"level":"info","ts":"2024-05-13T22:35:54.039487Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-05-13T22:35:54.039495Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-05-13T22:35:54.043443Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-13T22:35:54.048882Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-13T22:35:54.049083Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-13T22:35:54.049107Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-13T22:35:54.04914Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-950600 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-13T22:35:54.084669Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-13T22:35:54.084827Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-13T22:35:54.085071Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-13T22:35:54.085204Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-13T22:35:54.087292Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-05-13T22:35:54.087306Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-13T22:36:16.598268Z","caller":"traceutil/trace.go:171","msg":"trace[889581008] transaction","detail":"{read_only:false; response_revision:386; number_of_response:1; }","duration":"102.321202ms","start":"2024-05-13T22:36:16.495926Z","end":"2024-05-13T22:36:16.598247Z","steps":["trace[889581008] 'process raft request'  (duration: 91.199927ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-13T22:36:28.891103Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-05-13T22:36:28.891237Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"functional-950600","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2024-05-13T22:36:28.891465Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-13T22:36:28.891828Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	2024/05/13 22:36:28 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-05-13T22:36:28.99091Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-13T22:36:28.991091Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-05-13T22:36:28.991224Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2024-05-13T22:36:29.089941Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-05-13T22:36:29.090425Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-05-13T22:36:29.090573Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"functional-950600","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 22:37:25 up  1:40,  0 users,  load average: 1.14, 1.52, 1.16
	Linux functional-950600 5.15.146.1-microsoft-standard-WSL2 #1 SMP Thu Jan 11 04:09:03 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kube-apiserver [33c0d5909d46] <==
	I0513 22:36:50.601803       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0513 22:36:50.602518       1 controller.go:87] Starting OpenAPI V3 controller
	I0513 22:36:50.602521       1 controller.go:139] Starting OpenAPI controller
	I0513 22:36:50.602544       1 naming_controller.go:291] Starting NamingConditionController
	I0513 22:36:50.880009       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0513 22:36:50.880037       1 policy_source.go:224] refreshing policies
	I0513 22:36:50.880099       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0513 22:36:50.880137       1 aggregator.go:165] initial CRD sync complete...
	I0513 22:36:50.880147       1 autoregister_controller.go:141] Starting autoregister controller
	I0513 22:36:50.880156       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0513 22:36:50.880164       1 cache.go:39] Caches are synced for autoregister controller
	I0513 22:36:50.880203       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0513 22:36:50.880233       1 shared_informer.go:320] Caches are synced for configmaps
	I0513 22:36:50.885908       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0513 22:36:50.885926       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0513 22:36:50.886060       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0513 22:36:50.886090       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0513 22:36:50.978572       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0513 22:36:50.979088       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0513 22:36:51.083155       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0513 22:36:51.290040       1 trace.go:236] Trace[476558260]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:657d95fc-abb8-42cb-8b7d-ee0f7e51ca6b,client:192.168.49.2,api-group:,api-version:v1,name:,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:events,scope:resource,url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.30.0 (linux/amd64) kubernetes/7c48c2b,verb:POST (13-May-2024 22:36:50.697) (total time: 592ms):
	Trace[476558260]: [592.774668ms] [592.774668ms] END
	I0513 22:36:51.686114       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0513 22:37:05.281168       1 controller.go:615] quota admission added evaluator for: endpoints
	I0513 22:37:05.322533       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [7a9a4df7fa86] <==
	W0513 22:36:38.058389       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 22:36:38.118820       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 22:36:38.167978       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 22:36:38.175702       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 22:36:38.178712       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 22:36:38.188902       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 22:36:38.208214       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 22:36:38.231432       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 22:36:38.258595       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 22:36:38.260041       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 22:36:38.260216       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 22:36:38.265815       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 22:36:38.279358       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 22:36:38.324765       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 22:36:38.432231       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 22:36:38.493149       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 22:36:38.502570       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 22:36:38.523758       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 22:36:38.544921       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 22:36:38.545324       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 22:36:38.548058       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 22:36:38.602540       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 22:36:38.613522       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 22:36:38.684334       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 22:36:38.777050       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [7fd3055e4bf5] <==
	I0513 22:36:13.783307       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0513 22:36:13.783498       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0513 22:36:13.783593       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0513 22:36:13.783597       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0513 22:36:13.800383       1 shared_informer.go:320] Caches are synced for PV protection
	I0513 22:36:13.803426       1 shared_informer.go:320] Caches are synced for persistent volume
	I0513 22:36:13.882369       1 shared_informer.go:320] Caches are synced for attach detach
	I0513 22:36:14.215149       1 shared_informer.go:320] Caches are synced for garbage collector
	I0513 22:36:14.248892       1 shared_informer.go:320] Caches are synced for garbage collector
	I0513 22:36:14.248987       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0513 22:36:14.285959       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="654.675502ms"
	I0513 22:36:14.483638       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="197.38617ms"
	I0513 22:36:14.594595       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="110.800478ms"
	I0513 22:36:14.594770       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="89.114µs"
	I0513 22:36:17.084126       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="199.911291ms"
	I0513 22:36:17.108275       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="24.079624ms"
	I0513 22:36:17.108438       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="95.814µs"
	I0513 22:36:18.498879       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="80.112µs"
	I0513 22:36:18.537897       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="80.012µs"
	I0513 22:36:19.571655       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="16.376865ms"
	I0513 22:36:19.571898       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="88.413µs"
	I0513 22:36:24.091219       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="76.311µs"
	I0513 22:36:24.611521       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="248.237µs"
	I0513 22:36:24.635677       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="66.91µs"
	I0513 22:36:24.648122       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="72.31µs"
	
	
	==> kube-controller-manager [ae3bc77db96b] <==
	I0513 22:37:05.175823       1 shared_informer.go:320] Caches are synced for job
	I0513 22:37:05.177191       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0513 22:37:05.176668       1 shared_informer.go:320] Caches are synced for daemon sets
	I0513 22:37:05.177222       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0513 22:37:05.177232       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0513 22:37:05.177241       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0513 22:37:05.176758       1 shared_informer.go:320] Caches are synced for PVC protection
	I0513 22:37:05.176762       1 shared_informer.go:320] Caches are synced for taint
	I0513 22:37:05.176770       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0513 22:37:05.179313       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0513 22:37:05.179391       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-950600"
	I0513 22:37:05.179430       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0513 22:37:05.183237       1 shared_informer.go:320] Caches are synced for attach detach
	I0513 22:37:05.184205       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0513 22:37:05.216639       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0513 22:37:05.275743       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0513 22:37:05.275914       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0513 22:37:05.275985       1 shared_informer.go:320] Caches are synced for disruption
	I0513 22:37:05.276036       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0513 22:37:05.276210       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0513 22:37:05.277619       1 shared_informer.go:320] Caches are synced for resource quota
	I0513 22:37:05.323933       1 shared_informer.go:320] Caches are synced for resource quota
	I0513 22:37:05.792654       1 shared_informer.go:320] Caches are synced for garbage collector
	I0513 22:37:05.801969       1 shared_informer.go:320] Caches are synced for garbage collector
	I0513 22:37:05.802070       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [59902d194998] <==
	I0513 22:36:17.195313       1 server_linux.go:69] "Using iptables proxy"
	I0513 22:36:17.291866       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0513 22:36:17.484397       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0513 22:36:17.484585       1 server_linux.go:165] "Using iptables Proxier"
	I0513 22:36:17.488516       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0513 22:36:17.488612       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0513 22:36:17.488636       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0513 22:36:17.489184       1 server.go:872] "Version info" version="v1.30.0"
	I0513 22:36:17.489290       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0513 22:36:17.490630       1 config.go:319] "Starting node config controller"
	I0513 22:36:17.490726       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0513 22:36:17.491012       1 config.go:192] "Starting service config controller"
	I0513 22:36:17.491143       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0513 22:36:17.491363       1 config.go:101] "Starting endpoint slice config controller"
	I0513 22:36:17.491876       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0513 22:36:17.590919       1 shared_informer.go:320] Caches are synced for node config
	I0513 22:36:17.591433       1 shared_informer.go:320] Caches are synced for service config
	I0513 22:36:17.592533       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [c4096ef9afca] <==
	I0513 22:36:45.282546       1 server_linux.go:69] "Using iptables proxy"
	I0513 22:36:51.381663       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0513 22:36:51.688602       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0513 22:36:51.688781       1 server_linux.go:165] "Using iptables Proxier"
	I0513 22:36:51.695464       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0513 22:36:51.695651       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0513 22:36:51.695893       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0513 22:36:51.696947       1 server.go:872] "Version info" version="v1.30.0"
	I0513 22:36:51.696987       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0513 22:36:51.779336       1 config.go:192] "Starting service config controller"
	I0513 22:36:51.779459       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0513 22:36:51.779495       1 config.go:101] "Starting endpoint slice config controller"
	I0513 22:36:51.779547       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0513 22:36:51.779397       1 config.go:319] "Starting node config controller"
	I0513 22:36:51.779607       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0513 22:36:51.880963       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0513 22:36:51.881195       1 shared_informer.go:320] Caches are synced for service config
	I0513 22:36:51.881209       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [76ca5379b8c8] <==
	I0513 22:36:46.394298       1 serving.go:380] Generated self-signed cert in-memory
	I0513 22:36:51.390069       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0513 22:36:51.390207       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0513 22:36:51.485563       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0513 22:36:51.485876       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0513 22:36:51.485906       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0513 22:36:51.485567       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0513 22:36:51.486182       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0513 22:36:51.485852       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0513 22:36:51.485886       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0513 22:36:51.485913       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0513 22:36:51.587203       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0513 22:36:51.587368       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0513 22:36:51.587817       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kube-scheduler [e522f513430b] <==
	E0513 22:35:57.912775       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0513 22:35:57.957039       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0513 22:35:57.957139       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0513 22:35:58.077758       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0513 22:35:58.077863       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0513 22:35:58.126106       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0513 22:35:58.126205       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0513 22:35:58.142958       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0513 22:35:58.143054       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0513 22:35:58.205650       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0513 22:35:58.205761       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0513 22:35:58.216358       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0513 22:35:58.216458       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0513 22:35:58.337548       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0513 22:35:58.337655       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0513 22:35:58.348785       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0513 22:35:58.348892       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0513 22:35:58.434397       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0513 22:35:58.434504       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0513 22:35:58.439401       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0513 22:35:58.439491       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0513 22:35:58.499575       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0513 22:35:58.499668       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0513 22:36:00.513559       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0513 22:36:28.888104       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	May 13 22:36:42 functional-950600 kubelet[2510]: I0513 22:36:42.189647    2510 status_manager.go:853] "Failed to get status for pod" podUID="a644984a-80a5-459f-8986-72748e5f8487" pod="kube-system/kube-proxy-bqx72" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-bqx72\": dial tcp 192.168.49.2:8441: connect: connection refused"
	May 13 22:36:42 functional-950600 kubelet[2510]: I0513 22:36:42.190405    2510 status_manager.go:853] "Failed to get status for pod" podUID="803d730e-40db-44ae-a985-172df4c93426" pod="kube-system/coredns-7db6d8ff4d-tctr7" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-tctr7\": dial tcp 192.168.49.2:8441: connect: connection refused"
	May 13 22:36:42 functional-950600 kubelet[2510]: I0513 22:36:42.201089    2510 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e423681f729307b8bf7bd1e5fe5c12b693db7ab9dfefa13b50e8c2c80c791ad"
	May 13 22:36:42 functional-950600 kubelet[2510]: I0513 22:36:42.202513    2510 status_manager.go:853] "Failed to get status for pod" podUID="803d730e-40db-44ae-a985-172df4c93426" pod="kube-system/coredns-7db6d8ff4d-tctr7" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-tctr7\": dial tcp 192.168.49.2:8441: connect: connection refused"
	May 13 22:36:42 functional-950600 kubelet[2510]: I0513 22:36:42.204258    2510 status_manager.go:853] "Failed to get status for pod" podUID="197b8324-4017-4bce-9f03-e06f5620dfe4" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	May 13 22:36:42 functional-950600 kubelet[2510]: I0513 22:36:42.205018    2510 status_manager.go:853] "Failed to get status for pod" podUID="cb9da9f07e863e25e41a37ea3f02d3c6" pod="kube-system/kube-scheduler-functional-950600" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-950600\": dial tcp 192.168.49.2:8441: connect: connection refused"
	May 13 22:36:42 functional-950600 kubelet[2510]: I0513 22:36:42.206182    2510 status_manager.go:853] "Failed to get status for pod" podUID="1d9743e386fa70978c848291d403c78f" pod="kube-system/etcd-functional-950600" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-950600\": dial tcp 192.168.49.2:8441: connect: connection refused"
	May 13 22:36:42 functional-950600 kubelet[2510]: I0513 22:36:42.207171    2510 status_manager.go:853] "Failed to get status for pod" podUID="d1cffe0942dee6a504e219504e6bc04e" pod="kube-system/kube-apiserver-functional-950600" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-950600\": dial tcp 192.168.49.2:8441: connect: connection refused"
	May 13 22:36:42 functional-950600 kubelet[2510]: I0513 22:36:42.208140    2510 status_manager.go:853] "Failed to get status for pod" podUID="46be264c56c4fbfea546ac2e2e49dbfc" pod="kube-system/kube-controller-manager-functional-950600" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-950600\": dial tcp 192.168.49.2:8441: connect: connection refused"
	May 13 22:36:42 functional-950600 kubelet[2510]: I0513 22:36:42.208616    2510 status_manager.go:853] "Failed to get status for pod" podUID="a644984a-80a5-459f-8986-72748e5f8487" pod="kube-system/kube-proxy-bqx72" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-bqx72\": dial tcp 192.168.49.2:8441: connect: connection refused"
	May 13 22:36:43 functional-950600 kubelet[2510]: I0513 22:36:43.590829    2510 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f13b30805ec886c886a33f67c5f4f376c8bbeae0aa48bc4668e1e8324bd3d64d"
	May 13 22:36:43 functional-950600 kubelet[2510]: I0513 22:36:43.687036    2510 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b0cb8ff194698cc049dda5e623ded078e4dbe16d98b6517267cf7bfbd40fa1d"
	May 13 22:36:43 functional-950600 kubelet[2510]: E0513 22:36:43.999097    2510 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-950600?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	May 13 22:36:44 functional-950600 kubelet[2510]: I0513 22:36:44.188710    2510 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89b09fdd16ccc73d20d3cf82d0b90b074b1c156b8ff267b17ca809b88d984431"
	May 13 22:36:44 functional-950600 kubelet[2510]: I0513 22:36:44.283831    2510 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9c250e604bbeaf5e327af9b7b025f3940254290503f24a4047110b13e73864d"
	May 13 22:36:44 functional-950600 kubelet[2510]: I0513 22:36:44.484857    2510 status_manager.go:853] "Failed to get status for pod" podUID="1d9743e386fa70978c848291d403c78f" pod="kube-system/etcd-functional-950600" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-950600\": dial tcp 192.168.49.2:8441: connect: connection refused"
	May 13 22:36:44 functional-950600 kubelet[2510]: I0513 22:36:44.485402    2510 status_manager.go:853] "Failed to get status for pod" podUID="d1cffe0942dee6a504e219504e6bc04e" pod="kube-system/kube-apiserver-functional-950600" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-950600\": dial tcp 192.168.49.2:8441: connect: connection refused"
	May 13 22:36:44 functional-950600 kubelet[2510]: I0513 22:36:44.485912    2510 status_manager.go:853] "Failed to get status for pod" podUID="46be264c56c4fbfea546ac2e2e49dbfc" pod="kube-system/kube-controller-manager-functional-950600" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-950600\": dial tcp 192.168.49.2:8441: connect: connection refused"
	May 13 22:36:44 functional-950600 kubelet[2510]: I0513 22:36:44.486544    2510 status_manager.go:853] "Failed to get status for pod" podUID="a644984a-80a5-459f-8986-72748e5f8487" pod="kube-system/kube-proxy-bqx72" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-bqx72\": dial tcp 192.168.49.2:8441: connect: connection refused"
	May 13 22:36:44 functional-950600 kubelet[2510]: I0513 22:36:44.487223    2510 status_manager.go:853] "Failed to get status for pod" podUID="803d730e-40db-44ae-a985-172df4c93426" pod="kube-system/coredns-7db6d8ff4d-tctr7" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-tctr7\": dial tcp 192.168.49.2:8441: connect: connection refused"
	May 13 22:36:44 functional-950600 kubelet[2510]: I0513 22:36:44.487784    2510 status_manager.go:853] "Failed to get status for pod" podUID="197b8324-4017-4bce-9f03-e06f5620dfe4" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	May 13 22:36:44 functional-950600 kubelet[2510]: I0513 22:36:44.488451    2510 status_manager.go:853] "Failed to get status for pod" podUID="cb9da9f07e863e25e41a37ea3f02d3c6" pod="kube-system/kube-scheduler-functional-950600" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-950600\": dial tcp 192.168.49.2:8441: connect: connection refused"
	May 13 22:36:50 functional-950600 kubelet[2510]: E0513 22:36:50.878921    2510 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	May 13 22:36:50 functional-950600 kubelet[2510]: E0513 22:36:50.878952    2510 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	May 13 22:36:50 functional-950600 kubelet[2510]: E0513 22:36:50.879065    2510 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	
	
	==> storage-provisioner [506e9dfb6835] <==
	I0513 22:36:19.069424       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0513 22:36:19.089933       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0513 22:36:19.090036       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0513 22:36:19.106855       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0513 22:36:19.107070       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-950600_0851d5bb-be96-4a90-8c0b-a194383276f2!
	I0513 22:36:19.107059       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"407fc8f2-ce11-4e07-9872-88bcddaf695f", APIVersion:"v1", ResourceVersion:"427", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-950600_0851d5bb-be96-4a90-8c0b-a194383276f2 became leader
	I0513 22:36:19.208371       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-950600_0851d5bb-be96-4a90-8c0b-a194383276f2!
	
	
	==> storage-provisioner [91c97ddd7812] <==
	I0513 22:36:45.289888       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0513 22:36:51.282087       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0513 22:36:51.282146       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0513 22:37:08.786307       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0513 22:37:08.786950       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-950600_924fbae6-96f0-4902-a2c6-75211ec76d91!
	I0513 22:37:08.786849       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"407fc8f2-ce11-4e07-9872-88bcddaf695f", APIVersion:"v1", ResourceVersion:"533", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-950600_924fbae6-96f0-4902-a2c6-75211ec76d91 became leader
	I0513 22:37:08.888031       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-950600_924fbae6-96f0-4902-a2c6-75211ec76d91!
	

-- /stdout --
** stderr ** 
	W0513 22:37:24.064708    1728 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-950600 -n functional-950600
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-950600 -n functional-950600: (1.3064071s)
helpers_test.go:261: (dbg) Run:  kubectl --context functional-950600 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (6.36s)

TestFunctional/parallel/ConfigCmd (1.61s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-950600 config unset cpus" to be -""- but got *"W0513 22:38:32.586572    2164 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube4\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-950600 config get cpus: exit status 14 (253.09ms)

** stderr ** 
	W0513 22:38:32.884729   12204 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-950600 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0513 22:38:32.884729   12204 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube4\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 config set cpus 2
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-950600 config set cpus 2" to be -"! These changes will take effect upon a minikube delete and then a minikube start"- but got *"W0513 22:38:33.129738   10212 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube4\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n! These changes will take effect upon a minikube delete and then a minikube start"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 config get cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-950600 config get cpus" to be -""- but got *"W0513 22:38:33.393748    4668 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube4\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-950600 config unset cpus" to be -""- but got *"W0513 22:38:33.671035    8468 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube4\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-950600 config get cpus: exit status 14 (224.9933ms)
** stderr ** 
	W0513 22:38:33.943469    5192 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-950600 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0513 22:38:33.943469    5192 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube4\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
--- FAIL: TestFunctional/parallel/ConfigCmd (1.61s)
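Every `functional_test.go:1206` assertion above fails for the same reason: the test compares stderr against an exact expected string, and the Docker CLI context warning is prepended to otherwise-correct output. A minimal sketch of one possible workaround, stripping known-benign warning lines before the exact-match comparison (the helper name `filterKnownWarnings` is hypothetical, not part of the minikube test suite):

```go
package main

import (
	"fmt"
	"strings"
)

// filterKnownWarnings drops stderr lines matching the benign Docker CLI
// context warning seen in the failures above, so the remaining output
// can be compared exactly against the expected string.
func filterKnownWarnings(stderr string) string {
	var kept []string
	for _, line := range strings.Split(stderr, "\n") {
		if strings.Contains(line, "Unable to resolve the current Docker CLI context") {
			continue // known environment noise, not test-relevant output
		}
		kept = append(kept, line)
	}
	return strings.Join(kept, "\n")
}

func main() {
	got := "W0513 22:38:32.884729   12204 main.go:291] Unable to resolve the current Docker CLI context \"default\": context not found\n" +
		"Error: specified key could not be found in config"
	want := "Error: specified key could not be found in config"
	fmt.Println(filterKnownWarnings(got) == want)
	// prints true
}
```

Filtering in the test harness is only a mitigation; the underlying fix is restoring (or recreating) the "default" context's `meta.json` on the worker.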
TestStartStop/group/old-k8s-version/serial/SecondStart (430.55s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-873100 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.20.0
E0513 23:48:36.995677   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-950600\client.crt: The system cannot find the path specified.
E0513 23:50:30.092961   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-557700\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p old-k8s-version-873100 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.20.0: exit status 80 (6m56.3394558s)
-- stdout --
	* [old-k8s-version-873100] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4355 Build 19045.4355
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18872
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-873100" primary control-plane node in "old-k8s-version-873100" cluster
	* Pulling base image v0.0.44 ...
	* Restarting existing docker container for "old-k8s-version-873100" ...
	* Preparing Kubernetes v1.20.0 on Docker 26.1.1 ...
	* Verifying Kubernetes components...
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-873100 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, default-storageclass, dashboard, metrics-server
-- /stdout --
** stderr ** 
	W0513 23:47:50.813454   10160 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0513 23:47:50.888006   10160 out.go:291] Setting OutFile to fd 1884 ...
	I0513 23:47:50.888460   10160 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 23:47:50.888561   10160 out.go:304] Setting ErrFile to fd 1776...
	I0513 23:47:50.888561   10160 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 23:47:50.910141   10160 out.go:298] Setting JSON to false
	I0513 23:47:50.913987   10160 start.go:129] hostinfo: {"hostname":"minikube4","uptime":10309,"bootTime":1715633761,"procs":211,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4355 Build 19045.4355","kernelVersion":"10.0.19045.4355 Build 19045.4355","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W0513 23:47:50.914524   10160 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 23:47:50.917771   10160 out.go:177] * [old-k8s-version-873100] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4355 Build 19045.4355
	I0513 23:47:50.920909   10160 notify.go:220] Checking for updates...
	I0513 23:47:50.923458   10160 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0513 23:47:50.925948   10160 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0513 23:47:50.928184   10160 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I0513 23:47:50.932150   10160 out.go:177]   - MINIKUBE_LOCATION=18872
	I0513 23:47:50.934652   10160 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0513 23:47:50.938183   10160 config.go:182] Loaded profile config "old-k8s-version-873100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0513 23:47:50.941912   10160 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0513 23:47:50.945887   10160 driver.go:392] Setting default libvirt URI to qemu:///system
	I0513 23:47:51.242344   10160 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0513 23:47:51.251742   10160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0513 23:47:51.651368   10160 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:99 OomKillDisable:true NGoroutines:97 SystemTime:2024-05-13 23:47:51.591737787 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion
:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:D
ocker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0513 23:47:51.655698   10160 out.go:177] * Using the docker driver based on existing profile
	I0513 23:47:51.659062   10160 start.go:297] selected driver: docker
	I0513 23:47:51.659062   10160 start.go:901] validating driver "docker" against &{Name:old-k8s-version-873100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-873100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.mi
nikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 23:47:51.659062   10160 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0513 23:47:51.780730   10160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0513 23:47:52.118010   10160 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:99 OomKillDisable:true NGoroutines:97 SystemTime:2024-05-13 23:47:52.081870873 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion
:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:D
ocker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0513 23:47:52.118010   10160 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0513 23:47:52.118010   10160 cni.go:84] Creating CNI manager for ""
	I0513 23:47:52.118010   10160 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0513 23:47:52.118010   10160 start.go:340] cluster config:
	{Name:old-k8s-version-873100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-873100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSi
ze:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 23:47:52.121003   10160 out.go:177] * Starting "old-k8s-version-873100" primary control-plane node in "old-k8s-version-873100" cluster
	I0513 23:47:52.127017   10160 cache.go:121] Beginning downloading kic base image for docker with docker
	I0513 23:47:52.129010   10160 out.go:177] * Pulling base image v0.0.44 ...
	I0513 23:47:52.132002   10160 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0513 23:47:52.132002   10160 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e in local docker daemon
	I0513 23:47:52.132002   10160 preload.go:147] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0513 23:47:52.132002   10160 cache.go:56] Caching tarball of preloaded images
	I0513 23:47:52.132002   10160 preload.go:173] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0513 23:47:52.132002   10160 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0513 23:47:52.133004   10160 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-873100\config.json ...
	I0513 23:47:52.307917   10160 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e in local docker daemon, skipping pull
	I0513 23:47:52.307917   10160 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e exists in daemon, skipping load
	I0513 23:47:52.307917   10160 cache.go:194] Successfully downloaded all kic artifacts
	I0513 23:47:52.307917   10160 start.go:360] acquireMachinesLock for old-k8s-version-873100: {Name:mk23ac0acfdc4fdab999ab231c554ff791f80509 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 23:47:52.308918   10160 start.go:364] duration metric: took 1.0002ms to acquireMachinesLock for "old-k8s-version-873100"
	I0513 23:47:52.308918   10160 start.go:96] Skipping create...Using existing machine configuration
	I0513 23:47:52.308918   10160 fix.go:54] fixHost starting: 
	I0513 23:47:52.324915   10160 cli_runner.go:164] Run: docker container inspect old-k8s-version-873100 --format={{.State.Status}}
	I0513 23:47:52.482441   10160 fix.go:112] recreateIfNeeded on old-k8s-version-873100: state=Stopped err=<nil>
	W0513 23:47:52.482441   10160 fix.go:138] unexpected machine state, will restart: <nil>
	I0513 23:47:52.489457   10160 out.go:177] * Restarting existing docker container for "old-k8s-version-873100" ...
	I0513 23:47:52.501464   10160 cli_runner.go:164] Run: docker start old-k8s-version-873100
	I0513 23:47:53.189943   10160 cli_runner.go:164] Run: docker container inspect old-k8s-version-873100 --format={{.State.Status}}
	I0513 23:47:53.370772   10160 kic.go:430] container "old-k8s-version-873100" state is running.
	I0513 23:47:53.383367   10160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-873100
	I0513 23:47:53.576781   10160 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-873100\config.json ...
	I0513 23:47:53.579660   10160 machine.go:94] provisionDockerMachine start ...
	I0513 23:47:53.589666   10160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-873100
	I0513 23:47:53.786447   10160 main.go:141] libmachine: Using SSH client type: native
	I0513 23:47:53.786447   10160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x55a3c0] 0x55cfa0 <nil>  [] 0s} 127.0.0.1 56670 <nil> <nil>}
	I0513 23:47:53.786447   10160 main.go:141] libmachine: About to run SSH command:
	hostname
	I0513 23:47:53.791426   10160 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0513 23:47:56.986977   10160 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-873100
	
	I0513 23:47:56.986977   10160 ubuntu.go:169] provisioning hostname "old-k8s-version-873100"
	I0513 23:47:56.995974   10160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-873100
	I0513 23:47:57.165336   10160 main.go:141] libmachine: Using SSH client type: native
	I0513 23:47:57.165940   10160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x55a3c0] 0x55cfa0 <nil>  [] 0s} 127.0.0.1 56670 <nil> <nil>}
	I0513 23:47:57.165940   10160 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-873100 && echo "old-k8s-version-873100" | sudo tee /etc/hostname
	I0513 23:47:57.377719   10160 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-873100
	
	I0513 23:47:57.389415   10160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-873100
	I0513 23:47:57.553727   10160 main.go:141] libmachine: Using SSH client type: native
	I0513 23:47:57.554695   10160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x55a3c0] 0x55cfa0 <nil>  [] 0s} 127.0.0.1 56670 <nil> <nil>}
	I0513 23:47:57.554695   10160 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-873100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-873100/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-873100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0513 23:47:57.740259   10160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0513 23:47:57.740259   10160 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I0513 23:47:57.740259   10160 ubuntu.go:177] setting up certificates
	I0513 23:47:57.740259   10160 provision.go:84] configureAuth start
	I0513 23:47:57.750687   10160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-873100
	I0513 23:47:57.923439   10160 provision.go:143] copyHostCerts
	I0513 23:47:57.923480   10160 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I0513 23:47:57.923480   10160 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I0513 23:47:57.924333   10160 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0513 23:47:57.925168   10160 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I0513 23:47:57.925168   10160 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I0513 23:47:57.925168   10160 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0513 23:47:57.926838   10160 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I0513 23:47:57.926838   10160 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I0513 23:47:57.926838   10160 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I0513 23:47:57.928033   10160 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.old-k8s-version-873100 san=[127.0.0.1 192.168.94.2 localhost minikube old-k8s-version-873100]
	I0513 23:47:58.100569   10160 provision.go:177] copyRemoteCerts
	I0513 23:47:58.117267   10160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0513 23:47:58.128271   10160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-873100
	I0513 23:47:58.297237   10160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56670 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\old-k8s-version-873100\id_rsa Username:docker}
	I0513 23:47:58.436933   10160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0513 23:47:58.479675   10160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1233 bytes)
	I0513 23:47:58.525062   10160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0513 23:47:58.567154   10160 provision.go:87] duration metric: took 826.8576ms to configureAuth
	I0513 23:47:58.567720   10160 ubuntu.go:193] setting minikube options for container-runtime
	I0513 23:47:58.568255   10160 config.go:182] Loaded profile config "old-k8s-version-873100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0513 23:47:58.581759   10160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-873100
	I0513 23:47:58.755872   10160 main.go:141] libmachine: Using SSH client type: native
	I0513 23:47:58.756531   10160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x55a3c0] 0x55cfa0 <nil>  [] 0s} 127.0.0.1 56670 <nil> <nil>}
	I0513 23:47:58.756531   10160 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0513 23:47:58.951533   10160 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0513 23:47:58.952133   10160 ubuntu.go:71] root file system type: overlay
	I0513 23:47:58.952248   10160 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0513 23:47:58.964368   10160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-873100
	I0513 23:47:59.148213   10160 main.go:141] libmachine: Using SSH client type: native
	I0513 23:47:59.148361   10160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x55a3c0] 0x55cfa0 <nil>  [] 0s} 127.0.0.1 56670 <nil> <nil>}
	I0513 23:47:59.148900   10160 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0513 23:47:59.358082   10160 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0513 23:47:59.369000   10160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-873100
	I0513 23:47:59.547915   10160 main.go:141] libmachine: Using SSH client type: native
	I0513 23:47:59.548086   10160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x55a3c0] 0x55cfa0 <nil>  [] 0s} 127.0.0.1 56670 <nil> <nil>}
	I0513 23:47:59.548086   10160 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0513 23:47:59.753966   10160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0513 23:47:59.754098   10160 machine.go:97] duration metric: took 6.1741571s to provisionDockerMachine
	I0513 23:47:59.754098   10160 start.go:293] postStartSetup for "old-k8s-version-873100" (driver="docker")
	I0513 23:47:59.754098   10160 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0513 23:47:59.769594   10160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0513 23:47:59.780232   10160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-873100
	I0513 23:47:59.967171   10160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56670 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\old-k8s-version-873100\id_rsa Username:docker}
	I0513 23:48:00.120498   10160 ssh_runner.go:195] Run: cat /etc/os-release
	I0513 23:48:00.128739   10160 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0513 23:48:00.128739   10160 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0513 23:48:00.128739   10160 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0513 23:48:00.128739   10160 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0513 23:48:00.128739   10160 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I0513 23:48:00.129458   10160 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I0513 23:48:00.130156   10160 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\158682.pem -> 158682.pem in /etc/ssl/certs
	I0513 23:48:00.143971   10160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0513 23:48:00.163973   10160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\158682.pem --> /etc/ssl/certs/158682.pem (1708 bytes)
	I0513 23:48:00.208502   10160 start.go:296] duration metric: took 454.3835ms for postStartSetup
	I0513 23:48:00.219507   10160 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0513 23:48:00.229570   10160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-873100
	I0513 23:48:00.399200   10160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56670 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\old-k8s-version-873100\id_rsa Username:docker}
	I0513 23:48:00.542314   10160 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0513 23:48:00.559070   10160 fix.go:56] duration metric: took 8.2497771s for fixHost
	I0513 23:48:00.559070   10160 start.go:83] releasing machines lock for "old-k8s-version-873100", held for 8.2497771s
	I0513 23:48:00.568073   10160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-873100
	I0513 23:48:00.747633   10160 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0513 23:48:00.759395   10160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-873100
	I0513 23:48:00.759872   10160 ssh_runner.go:195] Run: cat /version.json
	I0513 23:48:00.770022   10160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-873100
	I0513 23:48:00.928876   10160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56670 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\old-k8s-version-873100\id_rsa Username:docker}
	I0513 23:48:00.929882   10160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56670 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\old-k8s-version-873100\id_rsa Username:docker}
	I0513 23:48:01.219623   10160 ssh_runner.go:195] Run: systemctl --version
	I0513 23:48:01.243216   10160 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0513 23:48:01.268031   10160 ssh_runner.go:195] Run: sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	W0513 23:48:01.283024   10160 start.go:438] unable to name loopback interface in configureRuntimes: unable to patch loopback cni config "/etc/cni/net.d/*loopback.conf*": sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;: Process exited with status 1
	stdout:
	
	stderr:
	find: '\\etc\\cni\\net.d': No such file or directory
	I0513 23:48:01.296356   10160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0513 23:48:01.345563   10160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0513 23:48:01.377414   10160 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0513 23:48:01.377414   10160 start.go:494] detecting cgroup driver to use...
	I0513 23:48:01.377414   10160 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0513 23:48:01.378088   10160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0513 23:48:01.425870   10160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0513 23:48:01.472223   10160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0513 23:48:01.506081   10160 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0513 23:48:01.519502   10160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0513 23:48:01.552074   10160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0513 23:48:01.588182   10160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0513 23:48:01.635738   10160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0513 23:48:01.669738   10160 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0513 23:48:01.706490   10160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0513 23:48:01.741552   10160 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0513 23:48:01.770593   10160 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0513 23:48:01.806214   10160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 23:48:01.973641   10160 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0513 23:48:02.147802   10160 start.go:494] detecting cgroup driver to use...
	I0513 23:48:02.147864   10160 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0513 23:48:02.161283   10160 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0513 23:48:02.188456   10160 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0513 23:48:02.207799   10160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0513 23:48:02.231022   10160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0513 23:48:02.292978   10160 ssh_runner.go:195] Run: which cri-dockerd
	I0513 23:48:02.320711   10160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0513 23:48:02.345225   10160 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0513 23:48:02.394227   10160 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0513 23:48:02.629097   10160 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0513 23:48:02.819366   10160 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0513 23:48:02.819906   10160 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0513 23:48:02.879052   10160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 23:48:03.097550   10160 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0513 23:48:03.846669   10160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0513 23:48:03.910991   10160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0513 23:48:03.975555   10160 out.go:204] * Preparing Kubernetes v1.20.0 on Docker 26.1.1 ...
	I0513 23:48:03.985571   10160 cli_runner.go:164] Run: docker exec -t old-k8s-version-873100 dig +short host.docker.internal
	I0513 23:48:04.279317   10160 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0513 23:48:04.291158   10160 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0513 23:48:04.302163   10160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0513 23:48:04.332110   10160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-873100
	I0513 23:48:04.499224   10160 kubeadm.go:877] updating cluster {Name:old-k8s-version-873100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-873100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0513 23:48:04.499408   10160 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0513 23:48:04.510963   10160 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0513 23:48:04.554672   10160 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-proxy:v1.20.0
	k8s.gcr.io/kube-controller-manager:v1.20.0
	k8s.gcr.io/kube-apiserver:v1.20.0
	k8s.gcr.io/kube-scheduler:v1.20.0
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	k8s.gcr.io/pause:3.2
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0513 23:48:04.554740   10160 docker.go:691] registry.k8s.io/kube-apiserver:v1.20.0 wasn't preloaded
	I0513 23:48:04.568173   10160 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0513 23:48:04.598919   10160 ssh_runner.go:195] Run: which lz4
	I0513 23:48:04.625981   10160 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0513 23:48:04.633960   10160 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0513 23:48:04.634954   10160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (401930599 bytes)
	I0513 23:48:17.192964   10160 docker.go:649] duration metric: took 12.5781677s to copy over tarball
	I0513 23:48:17.206713   10160 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0513 23:48:21.985984   10160 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (4.7789981s)
	I0513 23:48:21.986231   10160 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0513 23:48:22.102376   10160 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0513 23:48:22.129159   10160 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2824 bytes)
	I0513 23:48:22.180319   10160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 23:48:22.340219   10160 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0513 23:48:31.368360   10160 ssh_runner.go:235] Completed: sudo systemctl restart docker: (9.0277303s)
	I0513 23:48:31.378703   10160 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0513 23:48:31.430440   10160 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-proxy:v1.20.0
	k8s.gcr.io/kube-scheduler:v1.20.0
	k8s.gcr.io/kube-apiserver:v1.20.0
	k8s.gcr.io/kube-controller-manager:v1.20.0
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	k8s.gcr.io/pause:3.2
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0513 23:48:31.430498   10160 docker.go:691] registry.k8s.io/kube-apiserver:v1.20.0 wasn't preloaded
	I0513 23:48:31.430498   10160 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0513 23:48:31.449748   10160 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0513 23:48:31.457974   10160 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0513 23:48:31.459683   10160 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0513 23:48:31.462229   10160 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0513 23:48:31.476557   10160 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0513 23:48:31.476872   10160 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0513 23:48:31.476872   10160 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0513 23:48:31.476872   10160 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0513 23:48:31.477526   10160 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0513 23:48:31.489213   10160 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0513 23:48:31.491390   10160 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0513 23:48:31.492610   10160 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0513 23:48:31.501462   10160 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0513 23:48:31.514993   10160 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0513 23:48:31.527023   10160 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0513 23:48:31.539671   10160 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	W0513 23:48:31.602757   10160 image.go:187] authn lookup for registry.k8s.io/kube-apiserver:v1.20.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0513 23:48:31.725598   10160 image.go:187] authn lookup for registry.k8s.io/kube-controller-manager:v1.20.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0513 23:48:31.820652   10160 image.go:187] authn lookup for registry.k8s.io/etcd:3.4.13-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0513 23:48:31.913747   10160 image.go:187] authn lookup for gcr.io/k8s-minikube/storage-provisioner:v5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0513 23:48:32.008858   10160 image.go:187] authn lookup for registry.k8s.io/kube-proxy:v1.20.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0513 23:48:32.091877   10160 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0513 23:48:32.096687   10160 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	W0513 23:48:32.109604   10160 image.go:187] authn lookup for registry.k8s.io/coredns:1.7.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0513 23:48:32.125648   10160 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0513 23:48:32.171652   10160 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0513 23:48:32.171652   10160 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.20.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.20.0
	I0513 23:48:32.171652   10160 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0513 23:48:32.182320   10160 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0513 23:48:32.182320   10160 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.20.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.20.0
	I0513 23:48:32.182320   10160 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0513 23:48:32.188862   10160 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0513 23:48:32.202014   10160 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0513 23:48:32.223076   10160 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0513 23:48:32.223145   10160 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.4.13-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.13-0
	I0513 23:48:32.223216   10160 docker.go:337] Removing image: registry.k8s.io/etcd:3.4.13-0
	W0513 23:48:32.231035   10160 image.go:187] authn lookup for registry.k8s.io/pause:3.2 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0513 23:48:32.234147   10160 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.13-0
	I0513 23:48:32.242871   10160 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0513 23:48:32.293186   10160 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	W0513 23:48:32.349404   10160 image.go:187] authn lookup for registry.k8s.io/kube-scheduler:v1.20.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0513 23:48:32.374369   10160 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.20.0
	I0513 23:48:32.460545   10160 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.13-0
	I0513 23:48:32.460837   10160 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.20.0
	I0513 23:48:32.466881   10160 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0513 23:48:32.466881   10160 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.20.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.20.0
	I0513 23:48:32.466881   10160 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0513 23:48:32.471211   10160 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0513 23:48:32.479224   10160 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.20.0
	I0513 23:48:32.511779   10160 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0513 23:48:32.512869   10160 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.2 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.2
	I0513 23:48:32.512869   10160 docker.go:337] Removing image: registry.k8s.io/pause:3.2
	I0513 23:48:32.523246   10160 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I0513 23:48:32.569376   10160 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.20.0
	I0513 23:48:32.569853   10160 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0513 23:48:32.595594   10160 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.2
	I0513 23:48:32.627539   10160 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0513 23:48:32.627539   10160 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns:1.7.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.7.0
	I0513 23:48:32.627539   10160 docker.go:337] Removing image: registry.k8s.io/coredns:1.7.0
	I0513 23:48:32.640037   10160 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.7.0
	I0513 23:48:32.688988   10160 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.7.0
	I0513 23:48:32.860207   10160 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0513 23:48:32.915097   10160 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0513 23:48:32.915209   10160 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.20.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.20.0
	I0513 23:48:32.915209   10160 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0513 23:48:32.930704   10160 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0513 23:48:32.971849   10160 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.20.0
	I0513 23:48:32.972065   10160 cache_images.go:92] duration metric: took 1.5414967s to LoadCachedImages
	W0513 23:48:32.972065   10160 out.go:239] X Unable to load cached images: loading cached images: CreateFile C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.20.0: The system cannot find the file specified.
	X Unable to load cached images: loading cached images: CreateFile C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.20.0: The system cannot find the file specified.
	I0513 23:48:32.972065   10160 kubeadm.go:928] updating node { 192.168.94.2 8443 v1.20.0 docker true true} ...
	I0513 23:48:32.972770   10160 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-873100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-873100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0513 23:48:32.986065   10160 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0513 23:48:33.092619   10160 cni.go:84] Creating CNI manager for ""
	I0513 23:48:33.092651   10160 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0513 23:48:33.092743   10160 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0513 23:48:33.092743   10160 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-873100 NodeName:old-k8s-version-873100 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0513 23:48:33.092992   10160 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-873100"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0513 23:48:33.111384   10160 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0513 23:48:33.130004   10160 binaries.go:44] Found k8s binaries, skipping transfer
	I0513 23:48:33.145732   10160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0513 23:48:33.162872   10160 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0513 23:48:33.198743   10160 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0513 23:48:33.234761   10160 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2118 bytes)
	I0513 23:48:33.278355   10160 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0513 23:48:33.301028   10160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0513 23:48:33.340601   10160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 23:48:33.507125   10160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0513 23:48:33.537359   10160 certs.go:68] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-873100 for IP: 192.168.94.2
	I0513 23:48:33.537359   10160 certs.go:194] generating shared ca certs ...
	I0513 23:48:33.537504   10160 certs.go:226] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 23:48:33.538098   10160 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I0513 23:48:33.538315   10160 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I0513 23:48:33.538315   10160 certs.go:256] generating profile certs ...
	I0513 23:48:33.539350   10160 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-873100\client.key
	I0513 23:48:33.539350   10160 certs.go:359] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-873100\apiserver.key.1fdebec5
	I0513 23:48:33.540071   10160 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-873100\proxy-client.key
	I0513 23:48:33.541217   10160 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\15868.pem (1338 bytes)
	W0513 23:48:33.541217   10160 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\15868_empty.pem, impossibly tiny 0 bytes
	I0513 23:48:33.541217   10160 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0513 23:48:33.542750   10160 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0513 23:48:33.543366   10160 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0513 23:48:33.543473   10160 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0513 23:48:33.544188   10160 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\158682.pem (1708 bytes)
	I0513 23:48:33.545976   10160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0513 23:48:33.587300   10160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0513 23:48:33.641080   10160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0513 23:48:33.688663   10160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0513 23:48:33.757660   10160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-873100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0513 23:48:33.799231   10160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-873100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0513 23:48:33.874385   10160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-873100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0513 23:48:33.922073   10160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-873100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0513 23:48:34.008026   10160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\158682.pem --> /usr/share/ca-certificates/158682.pem (1708 bytes)
	I0513 23:48:34.091640   10160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0513 23:48:34.195569   10160 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\15868.pem --> /usr/share/ca-certificates/15868.pem (1338 bytes)
	I0513 23:48:34.249682   10160 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0513 23:48:34.297218   10160 ssh_runner.go:195] Run: openssl version
	I0513 23:48:34.338145   10160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/158682.pem && ln -fs /usr/share/ca-certificates/158682.pem /etc/ssl/certs/158682.pem"
	I0513 23:48:34.391214   10160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/158682.pem
	I0513 23:48:34.469125   10160 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 13 22:34 /usr/share/ca-certificates/158682.pem
	I0513 23:48:34.489440   10160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/158682.pem
	I0513 23:48:34.586277   10160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/158682.pem /etc/ssl/certs/3ec20f2e.0"
	I0513 23:48:34.702639   10160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0513 23:48:34.792627   10160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0513 23:48:34.857002   10160 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 13 22:25 /usr/share/ca-certificates/minikubeCA.pem
	I0513 23:48:34.872398   10160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0513 23:48:34.906516   10160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0513 23:48:34.996820   10160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15868.pem && ln -fs /usr/share/ca-certificates/15868.pem /etc/ssl/certs/15868.pem"
	I0513 23:48:35.035795   10160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15868.pem
	I0513 23:48:35.047509   10160 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 13 22:34 /usr/share/ca-certificates/15868.pem
	I0513 23:48:35.061240   10160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15868.pem
	I0513 23:48:35.093065   10160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15868.pem /etc/ssl/certs/51391683.0"
	I0513 23:48:35.129374   10160 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0513 23:48:35.155452   10160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0513 23:48:35.191723   10160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0513 23:48:35.222791   10160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0513 23:48:35.255196   10160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0513 23:48:35.286785   10160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0513 23:48:35.318241   10160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0513 23:48:35.337259   10160 kubeadm.go:391] StartCluster: {Name:old-k8s-version-873100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-873100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 23:48:35.350936   10160 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0513 23:48:35.495531   10160 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0513 23:48:35.574450   10160 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0513 23:48:35.574450   10160 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0513 23:48:35.574450   10160 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0513 23:48:35.596905   10160 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0513 23:48:35.670371   10160 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0513 23:48:35.689959   10160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-873100
	I0513 23:48:35.871731   10160 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-873100" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0513 23:48:35.873641   10160 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-873100" cluster setting kubeconfig missing "old-k8s-version-873100" context setting]
	I0513 23:48:35.876389   10160 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 23:48:35.904214   10160 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0513 23:48:35.975170   10160 kubeadm.go:624] The running cluster does not require reconfiguration: 127.0.0.1
	I0513 23:48:35.975170   10160 kubeadm.go:591] duration metric: took 400.7018ms to restartPrimaryControlPlane
	I0513 23:48:35.975170   10160 kubeadm.go:393] duration metric: took 637.8821ms to StartCluster
	I0513 23:48:35.975170   10160 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 23:48:35.975170   10160 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0513 23:48:35.986539   10160 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 23:48:35.988907   10160 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 23:48:35.988907   10160 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0513 23:48:35.989297   10160 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-873100"
	I0513 23:48:35.989297   10160 addons.go:69] Setting dashboard=true in profile "old-k8s-version-873100"
	I0513 23:48:35.989297   10160 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-873100"
	I0513 23:48:35.989297   10160 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-873100"
	W0513 23:48:35.989297   10160 addons.go:243] addon storage-provisioner should already be in state true
	I0513 23:48:35.989297   10160 config.go:182] Loaded profile config "old-k8s-version-873100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0513 23:48:35.989297   10160 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-873100"
	I0513 23:48:35.989468   10160 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-873100"
	W0513 23:48:35.989468   10160 addons.go:243] addon metrics-server should already be in state true
	I0513 23:48:35.989468   10160 host.go:66] Checking if "old-k8s-version-873100" exists ...
	I0513 23:48:35.989297   10160 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-873100"
	I0513 23:48:35.989297   10160 addons.go:234] Setting addon dashboard=true in "old-k8s-version-873100"
	W0513 23:48:35.994742   10160 addons.go:243] addon dashboard should already be in state true
	I0513 23:48:35.994742   10160 out.go:177] * Verifying Kubernetes components...
	I0513 23:48:35.989468   10160 host.go:66] Checking if "old-k8s-version-873100" exists ...
	I0513 23:48:35.994946   10160 host.go:66] Checking if "old-k8s-version-873100" exists ...
	I0513 23:48:36.020742   10160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 23:48:36.027468   10160 cli_runner.go:164] Run: docker container inspect old-k8s-version-873100 --format={{.State.Status}}
	I0513 23:48:36.031175   10160 cli_runner.go:164] Run: docker container inspect old-k8s-version-873100 --format={{.State.Status}}
	I0513 23:48:36.042156   10160 cli_runner.go:164] Run: docker container inspect old-k8s-version-873100 --format={{.State.Status}}
	I0513 23:48:36.042737   10160 cli_runner.go:164] Run: docker container inspect old-k8s-version-873100 --format={{.State.Status}}
	I0513 23:48:36.307210   10160 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0513 23:48:36.308220   10160 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0513 23:48:36.313869   10160 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0513 23:48:36.313869   10160 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0513 23:48:36.322561   10160 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0513 23:48:36.324565   10160 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0513 23:48:36.324565   10160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0513 23:48:36.324565   10160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-873100
	I0513 23:48:36.337659   10160 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-873100"
	W0513 23:48:36.337761   10160 addons.go:243] addon default-storageclass should already be in state true
	I0513 23:48:36.337869   10160 host.go:66] Checking if "old-k8s-version-873100" exists ...
	I0513 23:48:36.338510   10160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-873100
	I0513 23:48:36.353780   10160 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0513 23:48:36.356025   10160 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0513 23:48:36.356077   10160 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0513 23:48:36.377301   10160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-873100
	I0513 23:48:36.384799   10160 cli_runner.go:164] Run: docker container inspect old-k8s-version-873100 --format={{.State.Status}}
	I0513 23:48:36.614017   10160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56670 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\old-k8s-version-873100\id_rsa Username:docker}
	I0513 23:48:36.630531   10160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56670 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\old-k8s-version-873100\id_rsa Username:docker}
	I0513 23:48:36.655330   10160 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0513 23:48:36.655388   10160 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0513 23:48:36.674715   10160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56670 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\old-k8s-version-873100\id_rsa Username:docker}
	I0513 23:48:36.676603   10160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-873100
	I0513 23:48:36.897340   10160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56670 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\old-k8s-version-873100\id_rsa Username:docker}
	I0513 23:48:36.990207   10160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0513 23:48:37.190990   10160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-873100
	I0513 23:48:37.372916   10160 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0513 23:48:37.372916   10160 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0513 23:48:37.387269   10160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0513 23:48:37.424285   10160 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-873100" to be "Ready" ...
	I0513 23:48:37.469734   10160 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0513 23:48:37.469734   10160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0513 23:48:37.684509   10160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0513 23:48:37.773511   10160 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0513 23:48:37.773511   10160 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0513 23:48:37.773511   10160 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0513 23:48:37.774133   10160 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0513 23:48:38.073527   10160 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0513 23:48:38.073527   10160 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0513 23:48:38.073611   10160 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0513 23:48:38.073611   10160 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0513 23:48:38.361099   10160 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0513 23:48:38.361099   10160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0513 23:48:38.382700   10160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0513 23:48:38.476631   10160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.0890149s)
	W0513 23:48:38.476631   10160 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0513 23:48:38.476631   10160 retry.go:31] will retry after 257.679806ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0513 23:48:38.565318   10160 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0513 23:48:38.565384   10160 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0513 23:48:38.685596   10160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.0010416s)
	W0513 23:48:38.685596   10160 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0513 23:48:38.685596   10160 retry.go:31] will retry after 261.581503ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0513 23:48:38.757086   10160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0513 23:48:38.765303   10160 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0513 23:48:38.765303   10160 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0513 23:48:38.877191   10160 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0513 23:48:38.877191   10160 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0513 23:48:38.981912   10160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0513 23:48:39.057424   10160 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0513 23:48:39.057563   10160 retry.go:31] will retry after 336.052016ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0513 23:48:39.076053   10160 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0513 23:48:39.076594   10160 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0513 23:48:39.262089   10160 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0513 23:48:39.262385   10160 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0513 23:48:39.420137   10160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0513 23:48:39.457304   10160 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0513 23:48:39.457304   10160 retry.go:31] will retry after 397.637003ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0513 23:48:39.485525   10160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0513 23:48:39.760129   10160 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0513 23:48:39.760393   10160 retry.go:31] will retry after 496.606984ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0513 23:48:39.884907   10160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0513 23:48:40.270566   10160 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0513 23:48:40.270686   10160 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0513 23:48:40.270686   10160 retry.go:31] will retry after 330.208494ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0513 23:48:40.270686   10160 retry.go:31] will retry after 348.582547ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0513 23:48:40.285260   10160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0513 23:48:40.620583   10160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0513 23:48:40.630895   10160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0513 23:48:49.464709   10160 node_ready.go:49] node "old-k8s-version-873100" has status "Ready":"True"
	I0513 23:48:49.464882   10160 node_ready.go:38] duration metric: took 12.0400489s for node "old-k8s-version-873100" to be "Ready" ...
	I0513 23:48:49.464941   10160 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0513 23:48:49.873175   10160 pod_ready.go:78] waiting up to 6m0s for pod "coredns-74ff55c5b-wn76j" in "kube-system" namespace to be "Ready" ...
	I0513 23:48:50.169690   10160 pod_ready.go:92] pod "coredns-74ff55c5b-wn76j" in "kube-system" namespace has status "Ready":"True"
	I0513 23:48:50.169690   10160 pod_ready.go:81] duration metric: took 296.5018ms for pod "coredns-74ff55c5b-wn76j" in "kube-system" namespace to be "Ready" ...
	I0513 23:48:50.169690   10160 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-873100" in "kube-system" namespace to be "Ready" ...
	I0513 23:48:50.457108   10160 pod_ready.go:92] pod "etcd-old-k8s-version-873100" in "kube-system" namespace has status "Ready":"True"
	I0513 23:48:50.457108   10160 pod_ready.go:81] duration metric: took 287.4041ms for pod "etcd-old-k8s-version-873100" in "kube-system" namespace to be "Ready" ...
	I0513 23:48:50.457108   10160 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-873100" in "kube-system" namespace to be "Ready" ...
	I0513 23:48:50.559888   10160 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-873100" in "kube-system" namespace has status "Ready":"True"
	I0513 23:48:50.559888   10160 pod_ready.go:81] duration metric: took 102.7753ms for pod "kube-apiserver-old-k8s-version-873100" in "kube-system" namespace to be "Ready" ...
	I0513 23:48:50.559888   10160 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-873100" in "kube-system" namespace to be "Ready" ...
	I0513 23:48:51.773236   10160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (11.8877885s)
	I0513 23:48:51.773389   10160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (11.4874533s)
	I0513 23:48:52.675485   10160 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-873100" in "kube-system" namespace has status "Ready":"False"
	I0513 23:48:54.070009   10160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (13.4385022s)
	I0513 23:48:54.074680   10160 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-873100 addons enable metrics-server
	
	I0513 23:48:54.070009   10160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (13.4487698s)
	I0513 23:48:54.074793   10160 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-873100"
	I0513 23:48:54.081123   10160 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, dashboard, metrics-server
	I0513 23:48:54.089150   10160 addons.go:505] duration metric: took 18.0994197s for enable addons: enabled=[storage-provisioner default-storageclass dashboard metrics-server]
	I0513 23:48:55.163908   10160 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-873100" in "kube-system" namespace has status "Ready":"False"
	I0513 23:48:57.583179   10160 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-873100" in "kube-system" namespace has status "Ready":"False"
	I0513 23:49:00.091345   10160 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-873100" in "kube-system" namespace has status "Ready":"False"
	I0513 23:49:02.591676   10160 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-873100" in "kube-system" namespace has status "Ready":"False"
	I0513 23:49:05.095011   10160 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-873100" in "kube-system" namespace has status "Ready":"False"
	I0513 23:49:07.586169   10160 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-873100" in "kube-system" namespace has status "Ready":"False"
	I0513 23:49:10.089698   10160 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-873100" in "kube-system" namespace has status "Ready":"False"
	I0513 23:49:12.763898   10160 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-873100" in "kube-system" namespace has status "Ready":"False"
	I0513 23:49:15.087358   10160 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-873100" in "kube-system" namespace has status "Ready":"False"
	I0513 23:49:17.101436   10160 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-873100" in "kube-system" namespace has status "Ready":"False"
	I0513 23:49:19.600615   10160 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-873100" in "kube-system" namespace has status "Ready":"False"
	I0513 23:49:22.100655   10160 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-873100" in "kube-system" namespace has status "Ready":"False"
	I0513 23:49:24.580512   10160 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-873100" in "kube-system" namespace has status "Ready":"False"
	I0513 23:49:26.585124   10160 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-873100" in "kube-system" namespace has status "Ready":"False"
	I0513 23:49:28.595255   10160 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-873100" in "kube-system" namespace has status "Ready":"False"
	I0513 23:49:31.323781   10160 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-873100" in "kube-system" namespace has status "Ready":"False"
	I0513 23:49:33.588556   10160 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-873100" in "kube-system" namespace has status "Ready":"False"
	I0513 23:49:36.153098   10160 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-873100" in "kube-system" namespace has status "Ready":"False"
	I0513 23:49:38.593893   10160 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-873100" in "kube-system" namespace has status "Ready":"False"
	I0513 23:49:41.079947   10160 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-873100" in "kube-system" namespace has status "Ready":"False"
	I0513 23:49:43.086949   10160 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-873100" in "kube-system" namespace has status "Ready":"False"
	I0513 23:49:45.095934   10160 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-873100" in "kube-system" namespace has status "Ready":"False"
	I0513 23:49:47.594458   10160 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-873100" in "kube-system" namespace has status "Ready":"False"
	I0513 23:49:50.087840   10160 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-873100" in "kube-system" namespace has status "Ready":"False"
	I0513 23:49:52.094042   10160 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-873100" in "kube-system" namespace has status "Ready":"False"
	I0513 23:49:54.094675   10160 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-873100" in "kube-system" namespace has status "Ready":"False"
	I0513 23:49:56.603209   10160 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-873100" in "kube-system" namespace has status "Ready":"False"
	I0513 23:49:59.090129   10160 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-873100" in "kube-system" namespace has status "Ready":"False"
	I0513 23:50:01.599028   10160 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-873100" in "kube-system" namespace has status "Ready":"False"
	I0513 23:50:04.092771   10160 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-873100" in "kube-system" namespace has status "Ready":"False"
	I0513 23:50:06.602070   10160 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-873100" in "kube-system" namespace has status "Ready":"False"
	I0513 23:50:09.087177   10160 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-873100" in "kube-system" namespace has status "Ready":"False"
	I0513 23:50:11.587575   10160 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-873100" in "kube-system" namespace has status "Ready":"False"
	I0513 23:50:13.591653   10160 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-873100" in "kube-system" namespace has status "Ready":"False"
	I0513 23:50:15.600671   10160 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-873100" in "kube-system" namespace has status "Ready":"False"
	I0513 23:50:16.096160   10160 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-873100" in "kube-system" namespace has status "Ready":"True"
	I0513 23:50:16.096160   10160 pod_ready.go:81] duration metric: took 1m25.5323804s for pod "kube-controller-manager-old-k8s-version-873100" in "kube-system" namespace to be "Ready" ...
	I0513 23:50:16.096160   10160 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ndb7h" in "kube-system" namespace to be "Ready" ...
	I0513 23:50:16.111053   10160 pod_ready.go:92] pod "kube-proxy-ndb7h" in "kube-system" namespace has status "Ready":"True"
	I0513 23:50:16.111121   10160 pod_ready.go:81] duration metric: took 14.9609ms for pod "kube-proxy-ndb7h" in "kube-system" namespace to be "Ready" ...
	I0513 23:50:16.111161   10160 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-873100" in "kube-system" namespace to be "Ready" ...
	I0513 23:50:16.644091   10160 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-873100" in "kube-system" namespace has status "Ready":"True"
	I0513 23:50:16.644182   10160 pod_ready.go:81] duration metric: took 532.9974ms for pod "kube-scheduler-old-k8s-version-873100" in "kube-system" namespace to be "Ready" ...
	I0513 23:50:16.644246   10160 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace to be "Ready" ...
	I0513 23:50:18.659724   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:50:20.667156   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:50:22.678105   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:50:25.174576   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:50:27.667870   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:50:29.753305   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:50:32.162826   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:50:34.178282   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:50:36.671819   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:50:38.678140   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:50:41.166818   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:50:43.171428   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:50:45.671806   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:50:47.672155   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:50:50.174394   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:50:52.675743   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:50:54.680338   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:50:56.703724   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:50:59.177944   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:51:01.674697   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:51:03.680672   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:51:06.168810   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:51:08.205201   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:51:10.675682   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:51:12.681026   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:51:15.191789   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:51:17.210231   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:51:19.686269   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:51:22.191246   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:51:24.682056   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:51:27.190295   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:51:29.682021   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:51:32.183605   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:51:34.677985   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:51:39.236415   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:51:42.617674   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:51:44.675716   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:51:46.683703   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:51:49.197325   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:51:51.678716   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:51:54.191847   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:51:56.681122   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:51:58.683717   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:52:00.693547   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:52:03.636469   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:52:05.679849   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:52:07.683373   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:52:10.181751   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:52:12.197545   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:52:14.649746   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:52:16.693134   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:52:19.180465   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:52:21.676129   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:52:23.694896   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:52:25.737711   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:52:28.193016   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:52:30.689932   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:52:33.197224   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:52:35.703352   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:52:38.186537   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:52:40.685185   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:52:43.191024   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:52:45.684309   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:52:48.189548   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:52:51.290219   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:52:55.187357   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:52:57.685556   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:53:00.194651   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:53:02.684638   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:53:05.185543   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:53:07.674052   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:53:09.683178   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:53:11.685720   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:53:13.689607   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:53:16.185301   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:53:18.186047   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:53:20.193924   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:53:22.678578   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:53:24.685310   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:53:26.686486   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:53:29.192054   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:53:31.688831   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:53:34.173614   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:53:36.180227   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:53:38.192957   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:53:40.693117   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:53:42.713301   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:53:45.180525   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:53:47.186903   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:53:49.190571   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:53:51.673549   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:53:53.681472   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:53:55.691231   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:53:58.180642   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:54:00.199809   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:54:02.688554   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:54:05.175020   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:54:07.196729   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:54:09.689447   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:54:12.246497   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:54:14.682369   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:54:16.669640   10160 pod_ready.go:81] duration metric: took 4m0.0144367s for pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace to be "Ready" ...
	E0513 23:54:16.669695   10160 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0513 23:54:16.669811   10160 pod_ready.go:38] duration metric: took 5m27.1899064s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0513 23:54:16.669953   10160 api_server.go:52] waiting for apiserver process to appear ...
	I0513 23:54:16.683601   10160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 23:54:16.736849   10160 logs.go:276] 2 containers: [f57f199bd05f e6e4cb552a6a]
	I0513 23:54:16.757406   10160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 23:54:16.839051   10160 logs.go:276] 2 containers: [39664a48c5f4 85fd9289e443]
	I0513 23:54:16.858352   10160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 23:54:16.922542   10160 logs.go:276] 2 containers: [106035bb9094 916ecbe3fd9a]
	I0513 23:54:16.942787   10160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 23:54:17.003116   10160 logs.go:276] 2 containers: [60bccc7c5567 007b44664263]
	I0513 23:54:17.018136   10160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 23:54:17.116066   10160 logs.go:276] 2 containers: [bb60d8e76c76 9aa55abdebde]
	I0513 23:54:17.139016   10160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 23:54:17.210023   10160 logs.go:276] 2 containers: [2641bad5af28 24863edca480]
	I0513 23:54:17.229165   10160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 23:54:17.435193   10160 logs.go:276] 0 containers: []
	W0513 23:54:17.435365   10160 logs.go:278] No container was found matching "kindnet"
	I0513 23:54:17.462565   10160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0513 23:54:17.586615   10160 logs.go:276] 1 containers: [2053fa9b003e]
	I0513 23:54:17.599011   10160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 23:54:17.660277   10160 logs.go:276] 2 containers: [ff0f3609f076 2a5a3383f4d3]
	I0513 23:54:17.660277   10160 logs.go:123] Gathering logs for coredns [916ecbe3fd9a] ...
	I0513 23:54:17.660277   10160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 916ecbe3fd9a"
	I0513 23:54:17.749805   10160 logs.go:123] Gathering logs for kube-scheduler [60bccc7c5567] ...
	I0513 23:54:17.749865   10160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60bccc7c5567"
	I0513 23:54:17.816401   10160 logs.go:123] Gathering logs for kube-scheduler [007b44664263] ...
	I0513 23:54:17.816401   10160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 007b44664263"
	I0513 23:54:17.915128   10160 logs.go:123] Gathering logs for kube-proxy [9aa55abdebde] ...
	I0513 23:54:17.915239   10160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9aa55abdebde"
	I0513 23:54:18.030352   10160 logs.go:123] Gathering logs for kube-controller-manager [24863edca480] ...
	I0513 23:54:18.030930   10160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24863edca480"
	I0513 23:54:18.145365   10160 logs.go:123] Gathering logs for kubelet ...
	I0513 23:54:18.145365   10160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0513 23:54:18.332857   10160 logs.go:138] Found kubelet problem: May 13 23:48:49 old-k8s-version-873100 kubelet[1664]: E0513 23:48:49.163787    1664 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-873100" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-873100' and this object
	W0513 23:54:18.335116   10160 logs.go:138] Found kubelet problem: May 13 23:48:49 old-k8s-version-873100 kubelet[1664]: E0513 23:48:49.165308    1664 reflector.go:138] object-"kube-system"/"storage-provisioner-token-z4cmv": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-z4cmv" is forbidden: User "system:node:old-k8s-version-873100" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-873100' and this object
	W0513 23:54:18.335952   10160 logs.go:138] Found kubelet problem: May 13 23:48:49 old-k8s-version-873100 kubelet[1664]: E0513 23:48:49.168117    1664 reflector.go:138] object-"default"/"default-token-trqms": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-trqms" is forbidden: User "system:node:old-k8s-version-873100" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-873100' and this object
	W0513 23:54:18.336580   10160 logs.go:138] Found kubelet problem: May 13 23:48:49 old-k8s-version-873100 kubelet[1664]: E0513 23:48:49.168185    1664 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-873100" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-873100' and this object
	W0513 23:54:18.337104   10160 logs.go:138] Found kubelet problem: May 13 23:48:49 old-k8s-version-873100 kubelet[1664]: E0513 23:48:49.168230    1664 reflector.go:138] object-"kube-system"/"coredns-token-42crg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-42crg" is forbidden: User "system:node:old-k8s-version-873100" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-873100' and this object
	W0513 23:54:18.341324   10160 logs.go:138] Found kubelet problem: May 13 23:48:49 old-k8s-version-873100 kubelet[1664]: E0513 23:48:49.369371    1664 reflector.go:138] object-"kube-system"/"kube-proxy-token-kxxbw": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-kxxbw" is forbidden: User "system:node:old-k8s-version-873100" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-873100' and this object
	W0513 23:54:18.342040   10160 logs.go:138] Found kubelet problem: May 13 23:48:49 old-k8s-version-873100 kubelet[1664]: E0513 23:48:49.372416    1664 reflector.go:138] object-"kube-system"/"metrics-server-token-wgntv": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-wgntv" is forbidden: User "system:node:old-k8s-version-873100" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-873100' and this object
	W0513 23:54:18.352915   10160 logs.go:138] Found kubelet problem: May 13 23:48:55 old-k8s-version-873100 kubelet[1664]: E0513 23:48:55.555361    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0513 23:54:18.355806   10160 logs.go:138] Found kubelet problem: May 13 23:48:56 old-k8s-version-873100 kubelet[1664]: E0513 23:48:56.818771    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:18.357141   10160 logs.go:138] Found kubelet problem: May 13 23:48:57 old-k8s-version-873100 kubelet[1664]: E0513 23:48:57.878580    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:18.360869   10160 logs.go:138] Found kubelet problem: May 13 23:49:09 old-k8s-version-873100 kubelet[1664]: E0513 23:49:09.628842    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0513 23:54:18.367105   10160 logs.go:138] Found kubelet problem: May 13 23:49:17 old-k8s-version-873100 kubelet[1664]: E0513 23:49:17.525796    1664 pod_workers.go:191] Error syncing pod 06586d62-3ad3-4fbc-aacc-246d964ee67a ("storage-provisioner_kube-system(06586d62-3ad3-4fbc-aacc-246d964ee67a)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(06586d62-3ad3-4fbc-aacc-246d964ee67a)"
	W0513 23:54:18.367514   10160 logs.go:138] Found kubelet problem: May 13 23:49:21 old-k8s-version-873100 kubelet[1664]: E0513 23:49:21.594051    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:18.370731   10160 logs.go:138] Found kubelet problem: May 13 23:49:35 old-k8s-version-873100 kubelet[1664]: E0513 23:49:35.291592    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0513 23:54:18.373632   10160 logs.go:138] Found kubelet problem: May 13 23:49:35 old-k8s-version-873100 kubelet[1664]: E0513 23:49:35.677781    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0513 23:54:18.373632   10160 logs.go:138] Found kubelet problem: May 13 23:49:36 old-k8s-version-873100 kubelet[1664]: E0513 23:49:36.004593    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:18.375881   10160 logs.go:138] Found kubelet problem: May 13 23:49:47 old-k8s-version-873100 kubelet[1664]: E0513 23:49:47.579186    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:18.380062   10160 logs.go:138] Found kubelet problem: May 13 23:49:49 old-k8s-version-873100 kubelet[1664]: E0513 23:49:49.108018    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0513 23:54:18.380120   10160 logs.go:138] Found kubelet problem: May 13 23:49:58 old-k8s-version-873100 kubelet[1664]: E0513 23:49:58.572594    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:18.380120   10160 logs.go:138] Found kubelet problem: May 13 23:50:00 old-k8s-version-873100 kubelet[1664]: E0513 23:50:00.572911    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:18.380120   10160 logs.go:138] Found kubelet problem: May 13 23:50:11 old-k8s-version-873100 kubelet[1664]: E0513 23:50:11.574387    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:18.385226   10160 logs.go:138] Found kubelet problem: May 13 23:50:13 old-k8s-version-873100 kubelet[1664]: E0513 23:50:13.053232    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0513 23:54:18.385226   10160 logs.go:138] Found kubelet problem: May 13 23:50:24 old-k8s-version-873100 kubelet[1664]: E0513 23:50:24.590995    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:18.388702   10160 logs.go:138] Found kubelet problem: May 13 23:50:26 old-k8s-version-873100 kubelet[1664]: E0513 23:50:26.623478    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0513 23:54:18.389680   10160 logs.go:138] Found kubelet problem: May 13 23:50:36 old-k8s-version-873100 kubelet[1664]: E0513 23:50:36.566159    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:18.390186   10160 logs.go:138] Found kubelet problem: May 13 23:50:40 old-k8s-version-873100 kubelet[1664]: E0513 23:50:40.566224    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:18.390514   10160 logs.go:138] Found kubelet problem: May 13 23:50:47 old-k8s-version-873100 kubelet[1664]: E0513 23:50:47.567340    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:18.390514   10160 logs.go:138] Found kubelet problem: May 13 23:50:53 old-k8s-version-873100 kubelet[1664]: E0513 23:50:53.567507    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:18.395351   10160 logs.go:138] Found kubelet problem: May 13 23:51:01 old-k8s-version-873100 kubelet[1664]: E0513 23:51:01.128714    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0513 23:54:18.395351   10160 logs.go:138] Found kubelet problem: May 13 23:51:06 old-k8s-version-873100 kubelet[1664]: E0513 23:51:06.565035    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:18.395351   10160 logs.go:138] Found kubelet problem: May 13 23:51:12 old-k8s-version-873100 kubelet[1664]: E0513 23:51:12.567471    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:18.396517   10160 logs.go:138] Found kubelet problem: May 13 23:51:20 old-k8s-version-873100 kubelet[1664]: E0513 23:51:20.575231    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:18.396667   10160 logs.go:138] Found kubelet problem: May 13 23:51:26 old-k8s-version-873100 kubelet[1664]: E0513 23:51:26.592110    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:18.397450   10160 logs.go:138] Found kubelet problem: May 13 23:51:35 old-k8s-version-873100 kubelet[1664]: E0513 23:51:35.569043    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:18.397450   10160 logs.go:138] Found kubelet problem: May 13 23:51:38 old-k8s-version-873100 kubelet[1664]: E0513 23:51:38.572143    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:18.401041   10160 logs.go:138] Found kubelet problem: May 13 23:51:49 old-k8s-version-873100 kubelet[1664]: E0513 23:51:49.657257    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0513 23:54:18.401579   10160 logs.go:138] Found kubelet problem: May 13 23:51:53 old-k8s-version-873100 kubelet[1664]: E0513 23:51:53.564886    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:18.402042   10160 logs.go:138] Found kubelet problem: May 13 23:52:02 old-k8s-version-873100 kubelet[1664]: E0513 23:52:02.564687    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:18.402578   10160 logs.go:138] Found kubelet problem: May 13 23:52:06 old-k8s-version-873100 kubelet[1664]: E0513 23:52:06.558154    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:18.403092   10160 logs.go:138] Found kubelet problem: May 13 23:52:17 old-k8s-version-873100 kubelet[1664]: E0513 23:52:17.564379    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:18.403206   10160 logs.go:138] Found kubelet problem: May 13 23:52:18 old-k8s-version-873100 kubelet[1664]: E0513 23:52:18.560064    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:18.403206   10160 logs.go:138] Found kubelet problem: May 13 23:52:31 old-k8s-version-873100 kubelet[1664]: E0513 23:52:31.564002    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:18.407350   10160 logs.go:138] Found kubelet problem: May 13 23:52:32 old-k8s-version-873100 kubelet[1664]: E0513 23:52:32.268190    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0513 23:54:18.407350   10160 logs.go:138] Found kubelet problem: May 13 23:52:45 old-k8s-version-873100 kubelet[1664]: E0513 23:52:45.559043    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:18.408049   10160 logs.go:138] Found kubelet problem: May 13 23:52:45 old-k8s-version-873100 kubelet[1664]: E0513 23:52:45.559125    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:18.408049   10160 logs.go:138] Found kubelet problem: May 13 23:52:58 old-k8s-version-873100 kubelet[1664]: E0513 23:52:58.564922    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:18.408685   10160 logs.go:138] Found kubelet problem: May 13 23:52:58 old-k8s-version-873100 kubelet[1664]: E0513 23:52:58.567539    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:18.408685   10160 logs.go:138] Found kubelet problem: May 13 23:53:11 old-k8s-version-873100 kubelet[1664]: E0513 23:53:11.552039    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:18.409216   10160 logs.go:138] Found kubelet problem: May 13 23:53:12 old-k8s-version-873100 kubelet[1664]: E0513 23:53:12.548582    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:18.409271   10160 logs.go:138] Found kubelet problem: May 13 23:53:25 old-k8s-version-873100 kubelet[1664]: E0513 23:53:25.555702    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:18.409271   10160 logs.go:138] Found kubelet problem: May 13 23:53:27 old-k8s-version-873100 kubelet[1664]: E0513 23:53:27.548228    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:18.410022   10160 logs.go:138] Found kubelet problem: May 13 23:53:38 old-k8s-version-873100 kubelet[1664]: E0513 23:53:38.570162    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:18.410585   10160 logs.go:138] Found kubelet problem: May 13 23:53:39 old-k8s-version-873100 kubelet[1664]: E0513 23:53:39.542579    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:18.410585   10160 logs.go:138] Found kubelet problem: May 13 23:53:52 old-k8s-version-873100 kubelet[1664]: E0513 23:53:52.541470    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:18.410585   10160 logs.go:138] Found kubelet problem: May 13 23:53:53 old-k8s-version-873100 kubelet[1664]: E0513 23:53:53.542320    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:18.411742   10160 logs.go:138] Found kubelet problem: May 13 23:54:03 old-k8s-version-873100 kubelet[1664]: E0513 23:54:03.543124    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:18.411967   10160 logs.go:138] Found kubelet problem: May 13 23:54:04 old-k8s-version-873100 kubelet[1664]: E0513 23:54:04.542127    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:18.412386   10160 logs.go:138] Found kubelet problem: May 13 23:54:14 old-k8s-version-873100 kubelet[1664]: E0513 23:54:14.552690    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0513 23:54:18.412386   10160 logs.go:123] Gathering logs for dmesg ...
	I0513 23:54:18.412386   10160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 23:54:18.475072   10160 logs.go:123] Gathering logs for kube-apiserver [e6e4cb552a6a] ...
	I0513 23:54:18.475610   10160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e4cb552a6a"
	I0513 23:54:18.710634   10160 logs.go:123] Gathering logs for etcd [39664a48c5f4] ...
	I0513 23:54:18.710683   10160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39664a48c5f4"
	I0513 23:54:18.822814   10160 logs.go:123] Gathering logs for kube-proxy [bb60d8e76c76] ...
	I0513 23:54:18.822814   10160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb60d8e76c76"
	I0513 23:54:18.955928   10160 logs.go:123] Gathering logs for storage-provisioner [ff0f3609f076] ...
	I0513 23:54:18.955985   10160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff0f3609f076"
	I0513 23:54:19.040601   10160 logs.go:123] Gathering logs for Docker ...
	I0513 23:54:19.040601   10160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 23:54:19.141355   10160 logs.go:123] Gathering logs for container status ...
	I0513 23:54:19.141417   10160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 23:54:19.291066   10160 logs.go:123] Gathering logs for describe nodes ...
	I0513 23:54:19.291106   10160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 23:54:19.922108   10160 logs.go:123] Gathering logs for kube-apiserver [f57f199bd05f] ...
	I0513 23:54:19.922108   10160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f57f199bd05f"
	I0513 23:54:20.056530   10160 logs.go:123] Gathering logs for coredns [106035bb9094] ...
	I0513 23:54:20.057083   10160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 106035bb9094"
	I0513 23:54:20.135797   10160 logs.go:123] Gathering logs for kube-controller-manager [2641bad5af28] ...
	I0513 23:54:20.135797   10160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2641bad5af28"
	I0513 23:54:20.266613   10160 logs.go:123] Gathering logs for storage-provisioner [2a5a3383f4d3] ...
	I0513 23:54:20.266756   10160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a5a3383f4d3"
	I0513 23:54:20.375112   10160 logs.go:123] Gathering logs for etcd [85fd9289e443] ...
	I0513 23:54:20.375226   10160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85fd9289e443"
	I0513 23:54:20.493051   10160 logs.go:123] Gathering logs for kubernetes-dashboard [2053fa9b003e] ...
	I0513 23:54:20.494201   10160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2053fa9b003e"
	I0513 23:54:20.632848   10160 out.go:304] Setting ErrFile to fd 1776...
	I0513 23:54:20.632848   10160 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0513 23:54:20.632848   10160 out.go:239] X Problems detected in kubelet:
	W0513 23:54:20.632848   10160 out.go:239]   May 13 23:53:52 old-k8s-version-873100 kubelet[1664]: E0513 23:53:52.541470    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:20.632848   10160 out.go:239]   May 13 23:53:53 old-k8s-version-873100 kubelet[1664]: E0513 23:53:53.542320    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:20.632848   10160 out.go:239]   May 13 23:54:03 old-k8s-version-873100 kubelet[1664]: E0513 23:54:03.543124    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:20.632848   10160 out.go:239]   May 13 23:54:04 old-k8s-version-873100 kubelet[1664]: E0513 23:54:04.542127    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:20.632848   10160 out.go:239]   May 13 23:54:14 old-k8s-version-873100 kubelet[1664]: E0513 23:54:14.552690    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0513 23:54:20.633395   10160 out.go:304] Setting ErrFile to fd 1776...
	I0513 23:54:20.633558   10160 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 23:54:30.680737   10160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0513 23:54:30.714447   10160 api_server.go:72] duration metric: took 5m54.7092528s to wait for apiserver process to appear ...
	I0513 23:54:30.714566   10160 api_server.go:88] waiting for apiserver healthz status ...
	I0513 23:54:30.732074   10160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 23:54:30.832490   10160 logs.go:276] 2 containers: [f57f199bd05f e6e4cb552a6a]
	I0513 23:54:30.851585   10160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 23:54:30.942751   10160 logs.go:276] 2 containers: [39664a48c5f4 85fd9289e443]
	I0513 23:54:30.955651   10160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 23:54:31.022442   10160 logs.go:276] 2 containers: [106035bb9094 916ecbe3fd9a]
	I0513 23:54:31.037663   10160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 23:54:31.099190   10160 logs.go:276] 2 containers: [60bccc7c5567 007b44664263]
	I0513 23:54:31.112045   10160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 23:54:31.179413   10160 logs.go:276] 2 containers: [bb60d8e76c76 9aa55abdebde]
	I0513 23:54:31.193205   10160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 23:54:31.253031   10160 logs.go:276] 2 containers: [2641bad5af28 24863edca480]
	I0513 23:54:31.274331   10160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 23:54:31.345427   10160 logs.go:276] 0 containers: []
	W0513 23:54:31.345427   10160 logs.go:278] No container was found matching "kindnet"
	I0513 23:54:31.367214   10160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 23:54:31.441396   10160 logs.go:276] 2 containers: [ff0f3609f076 2a5a3383f4d3]
	I0513 23:54:31.456382   10160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0513 23:54:31.533000   10160 logs.go:276] 1 containers: [2053fa9b003e]
	I0513 23:54:31.533000   10160 logs.go:123] Gathering logs for etcd [39664a48c5f4] ...
	I0513 23:54:31.533000   10160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39664a48c5f4"
	I0513 23:54:31.641269   10160 logs.go:123] Gathering logs for coredns [916ecbe3fd9a] ...
	I0513 23:54:31.641269   10160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 916ecbe3fd9a"
	I0513 23:54:31.711274   10160 logs.go:123] Gathering logs for kube-controller-manager [24863edca480] ...
	I0513 23:54:31.711430   10160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24863edca480"
	I0513 23:54:31.824043   10160 logs.go:123] Gathering logs for dmesg ...
	I0513 23:54:31.824043   10160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 23:54:31.863959   10160 logs.go:123] Gathering logs for kube-apiserver [f57f199bd05f] ...
	I0513 23:54:31.864070   10160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f57f199bd05f"
	I0513 23:54:31.988619   10160 logs.go:123] Gathering logs for kube-scheduler [60bccc7c5567] ...
	I0513 23:54:31.988711   10160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60bccc7c5567"
	I0513 23:54:32.056673   10160 logs.go:123] Gathering logs for kube-scheduler [007b44664263] ...
	I0513 23:54:32.056734   10160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 007b44664263"
	I0513 23:54:32.132892   10160 logs.go:123] Gathering logs for kube-proxy [9aa55abdebde] ...
	I0513 23:54:32.132892   10160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9aa55abdebde"
	I0513 23:54:32.207259   10160 logs.go:123] Gathering logs for storage-provisioner [ff0f3609f076] ...
	I0513 23:54:32.207374   10160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff0f3609f076"
	I0513 23:54:32.297440   10160 logs.go:123] Gathering logs for kubernetes-dashboard [2053fa9b003e] ...
	I0513 23:54:32.297440   10160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2053fa9b003e"
	I0513 23:54:32.384073   10160 logs.go:123] Gathering logs for Docker ...
	I0513 23:54:32.384073   10160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 23:54:32.484253   10160 logs.go:123] Gathering logs for describe nodes ...
	I0513 23:54:32.484367   10160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 23:54:32.855664   10160 logs.go:123] Gathering logs for kube-apiserver [e6e4cb552a6a] ...
	I0513 23:54:32.855728   10160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e4cb552a6a"
	I0513 23:54:33.035474   10160 logs.go:123] Gathering logs for container status ...
	I0513 23:54:33.035474   10160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 23:54:33.164611   10160 logs.go:123] Gathering logs for kube-controller-manager [2641bad5af28] ...
	I0513 23:54:33.164662   10160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2641bad5af28"
	I0513 23:54:33.266456   10160 logs.go:123] Gathering logs for storage-provisioner [2a5a3383f4d3] ...
	I0513 23:54:33.266456   10160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a5a3383f4d3"
	I0513 23:54:33.416575   10160 logs.go:123] Gathering logs for coredns [106035bb9094] ...
	I0513 23:54:33.416653   10160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 106035bb9094"
	I0513 23:54:33.516663   10160 logs.go:123] Gathering logs for kube-proxy [bb60d8e76c76] ...
	I0513 23:54:33.516663   10160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb60d8e76c76"
	I0513 23:54:33.599596   10160 logs.go:123] Gathering logs for kubelet ...
	I0513 23:54:33.599861   10160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0513 23:54:33.733642   10160 logs.go:138] Found kubelet problem: May 13 23:48:49 old-k8s-version-873100 kubelet[1664]: E0513 23:48:49.163787    1664 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-873100" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-873100' and this object
	W0513 23:54:33.734802   10160 logs.go:138] Found kubelet problem: May 13 23:48:49 old-k8s-version-873100 kubelet[1664]: E0513 23:48:49.165308    1664 reflector.go:138] object-"kube-system"/"storage-provisioner-token-z4cmv": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-z4cmv" is forbidden: User "system:node:old-k8s-version-873100" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-873100' and this object
	W0513 23:54:33.736263   10160 logs.go:138] Found kubelet problem: May 13 23:48:49 old-k8s-version-873100 kubelet[1664]: E0513 23:48:49.168117    1664 reflector.go:138] object-"default"/"default-token-trqms": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-trqms" is forbidden: User "system:node:old-k8s-version-873100" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-873100' and this object
	W0513 23:54:33.736990   10160 logs.go:138] Found kubelet problem: May 13 23:48:49 old-k8s-version-873100 kubelet[1664]: E0513 23:48:49.168185    1664 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-873100" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-873100' and this object
	W0513 23:54:33.736990   10160 logs.go:138] Found kubelet problem: May 13 23:48:49 old-k8s-version-873100 kubelet[1664]: E0513 23:48:49.168230    1664 reflector.go:138] object-"kube-system"/"coredns-token-42crg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-42crg" is forbidden: User "system:node:old-k8s-version-873100" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-873100' and this object
	W0513 23:54:33.742587   10160 logs.go:138] Found kubelet problem: May 13 23:48:49 old-k8s-version-873100 kubelet[1664]: E0513 23:48:49.369371    1664 reflector.go:138] object-"kube-system"/"kube-proxy-token-kxxbw": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-kxxbw" is forbidden: User "system:node:old-k8s-version-873100" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-873100' and this object
	W0513 23:54:33.743306   10160 logs.go:138] Found kubelet problem: May 13 23:48:49 old-k8s-version-873100 kubelet[1664]: E0513 23:48:49.372416    1664 reflector.go:138] object-"kube-system"/"metrics-server-token-wgntv": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-wgntv" is forbidden: User "system:node:old-k8s-version-873100" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-873100' and this object
	W0513 23:54:33.753735   10160 logs.go:138] Found kubelet problem: May 13 23:48:55 old-k8s-version-873100 kubelet[1664]: E0513 23:48:55.555361    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0513 23:54:33.756017   10160 logs.go:138] Found kubelet problem: May 13 23:48:56 old-k8s-version-873100 kubelet[1664]: E0513 23:48:56.818771    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.757054   10160 logs.go:138] Found kubelet problem: May 13 23:48:57 old-k8s-version-873100 kubelet[1664]: E0513 23:48:57.878580    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.761215   10160 logs.go:138] Found kubelet problem: May 13 23:49:09 old-k8s-version-873100 kubelet[1664]: E0513 23:49:09.628842    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0513 23:54:33.765261   10160 logs.go:138] Found kubelet problem: May 13 23:49:17 old-k8s-version-873100 kubelet[1664]: E0513 23:49:17.525796    1664 pod_workers.go:191] Error syncing pod 06586d62-3ad3-4fbc-aacc-246d964ee67a ("storage-provisioner_kube-system(06586d62-3ad3-4fbc-aacc-246d964ee67a)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(06586d62-3ad3-4fbc-aacc-246d964ee67a)"
	W0513 23:54:33.765546   10160 logs.go:138] Found kubelet problem: May 13 23:49:21 old-k8s-version-873100 kubelet[1664]: E0513 23:49:21.594051    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.769483   10160 logs.go:138] Found kubelet problem: May 13 23:49:35 old-k8s-version-873100 kubelet[1664]: E0513 23:49:35.291592    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0513 23:54:33.773559   10160 logs.go:138] Found kubelet problem: May 13 23:49:35 old-k8s-version-873100 kubelet[1664]: E0513 23:49:35.677781    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0513 23:54:33.773559   10160 logs.go:138] Found kubelet problem: May 13 23:49:36 old-k8s-version-873100 kubelet[1664]: E0513 23:49:36.004593    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.774270   10160 logs.go:138] Found kubelet problem: May 13 23:49:47 old-k8s-version-873100 kubelet[1664]: E0513 23:49:47.579186    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.776534   10160 logs.go:138] Found kubelet problem: May 13 23:49:49 old-k8s-version-873100 kubelet[1664]: E0513 23:49:49.108018    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0513 23:54:33.779046   10160 logs.go:138] Found kubelet problem: May 13 23:49:58 old-k8s-version-873100 kubelet[1664]: E0513 23:49:58.572594    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.779272   10160 logs.go:138] Found kubelet problem: May 13 23:50:00 old-k8s-version-873100 kubelet[1664]: E0513 23:50:00.572911    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.779625   10160 logs.go:138] Found kubelet problem: May 13 23:50:11 old-k8s-version-873100 kubelet[1664]: E0513 23:50:11.574387    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.782429   10160 logs.go:138] Found kubelet problem: May 13 23:50:13 old-k8s-version-873100 kubelet[1664]: E0513 23:50:13.053232    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0513 23:54:33.782483   10160 logs.go:138] Found kubelet problem: May 13 23:50:24 old-k8s-version-873100 kubelet[1664]: E0513 23:50:24.590995    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.786719   10160 logs.go:138] Found kubelet problem: May 13 23:50:26 old-k8s-version-873100 kubelet[1664]: E0513 23:50:26.623478    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0513 23:54:33.787269   10160 logs.go:138] Found kubelet problem: May 13 23:50:36 old-k8s-version-873100 kubelet[1664]: E0513 23:50:36.566159    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.787642   10160 logs.go:138] Found kubelet problem: May 13 23:50:40 old-k8s-version-873100 kubelet[1664]: E0513 23:50:40.566224    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.788214   10160 logs.go:138] Found kubelet problem: May 13 23:50:47 old-k8s-version-873100 kubelet[1664]: E0513 23:50:47.567340    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.788708   10160 logs.go:138] Found kubelet problem: May 13 23:50:53 old-k8s-version-873100 kubelet[1664]: E0513 23:50:53.567507    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.793435   10160 logs.go:138] Found kubelet problem: May 13 23:51:01 old-k8s-version-873100 kubelet[1664]: E0513 23:51:01.128714    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0513 23:54:33.794126   10160 logs.go:138] Found kubelet problem: May 13 23:51:06 old-k8s-version-873100 kubelet[1664]: E0513 23:51:06.565035    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.794126   10160 logs.go:138] Found kubelet problem: May 13 23:51:12 old-k8s-version-873100 kubelet[1664]: E0513 23:51:12.567471    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.794906   10160 logs.go:138] Found kubelet problem: May 13 23:51:20 old-k8s-version-873100 kubelet[1664]: E0513 23:51:20.575231    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.795312   10160 logs.go:138] Found kubelet problem: May 13 23:51:26 old-k8s-version-873100 kubelet[1664]: E0513 23:51:26.592110    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.795577   10160 logs.go:138] Found kubelet problem: May 13 23:51:35 old-k8s-version-873100 kubelet[1664]: E0513 23:51:35.569043    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.795577   10160 logs.go:138] Found kubelet problem: May 13 23:51:38 old-k8s-version-873100 kubelet[1664]: E0513 23:51:38.572143    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.796802   10160 logs.go:138] Found kubelet problem: May 13 23:51:49 old-k8s-version-873100 kubelet[1664]: E0513 23:51:49.657257    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0513 23:54:33.796802   10160 logs.go:138] Found kubelet problem: May 13 23:51:53 old-k8s-version-873100 kubelet[1664]: E0513 23:51:53.564886    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.799442   10160 logs.go:138] Found kubelet problem: May 13 23:52:02 old-k8s-version-873100 kubelet[1664]: E0513 23:52:02.564687    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.799872   10160 logs.go:138] Found kubelet problem: May 13 23:52:06 old-k8s-version-873100 kubelet[1664]: E0513 23:52:06.558154    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.800172   10160 logs.go:138] Found kubelet problem: May 13 23:52:17 old-k8s-version-873100 kubelet[1664]: E0513 23:52:17.564379    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.800392   10160 logs.go:138] Found kubelet problem: May 13 23:52:18 old-k8s-version-873100 kubelet[1664]: E0513 23:52:18.560064    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.801032   10160 logs.go:138] Found kubelet problem: May 13 23:52:31 old-k8s-version-873100 kubelet[1664]: E0513 23:52:31.564002    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.802838   10160 logs.go:138] Found kubelet problem: May 13 23:52:32 old-k8s-version-873100 kubelet[1664]: E0513 23:52:32.268190    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0513 23:54:33.802838   10160 logs.go:138] Found kubelet problem: May 13 23:52:45 old-k8s-version-873100 kubelet[1664]: E0513 23:52:45.559043    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.802838   10160 logs.go:138] Found kubelet problem: May 13 23:52:45 old-k8s-version-873100 kubelet[1664]: E0513 23:52:45.559125    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.802838   10160 logs.go:138] Found kubelet problem: May 13 23:52:58 old-k8s-version-873100 kubelet[1664]: E0513 23:52:58.564922    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.804927   10160 logs.go:138] Found kubelet problem: May 13 23:52:58 old-k8s-version-873100 kubelet[1664]: E0513 23:52:58.567539    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.804927   10160 logs.go:138] Found kubelet problem: May 13 23:53:11 old-k8s-version-873100 kubelet[1664]: E0513 23:53:11.552039    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.805654   10160 logs.go:138] Found kubelet problem: May 13 23:53:12 old-k8s-version-873100 kubelet[1664]: E0513 23:53:12.548582    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.805984   10160 logs.go:138] Found kubelet problem: May 13 23:53:25 old-k8s-version-873100 kubelet[1664]: E0513 23:53:25.555702    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.806266   10160 logs.go:138] Found kubelet problem: May 13 23:53:27 old-k8s-version-873100 kubelet[1664]: E0513 23:53:27.548228    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.806507   10160 logs.go:138] Found kubelet problem: May 13 23:53:38 old-k8s-version-873100 kubelet[1664]: E0513 23:53:38.570162    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.807044   10160 logs.go:138] Found kubelet problem: May 13 23:53:39 old-k8s-version-873100 kubelet[1664]: E0513 23:53:39.542579    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.807435   10160 logs.go:138] Found kubelet problem: May 13 23:53:52 old-k8s-version-873100 kubelet[1664]: E0513 23:53:52.541470    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.807773   10160 logs.go:138] Found kubelet problem: May 13 23:53:53 old-k8s-version-873100 kubelet[1664]: E0513 23:53:53.542320    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.808440   10160 logs.go:138] Found kubelet problem: May 13 23:54:03 old-k8s-version-873100 kubelet[1664]: E0513 23:54:03.543124    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.808692   10160 logs.go:138] Found kubelet problem: May 13 23:54:04 old-k8s-version-873100 kubelet[1664]: E0513 23:54:04.542127    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.809108   10160 logs.go:138] Found kubelet problem: May 13 23:54:14 old-k8s-version-873100 kubelet[1664]: E0513 23:54:14.552690    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.809486   10160 logs.go:138] Found kubelet problem: May 13 23:54:19 old-k8s-version-873100 kubelet[1664]: E0513 23:54:19.539558    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.810110   10160 logs.go:138] Found kubelet problem: May 13 23:54:27 old-k8s-version-873100 kubelet[1664]: E0513 23:54:27.537151    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.810481   10160 logs.go:138] Found kubelet problem: May 13 23:54:30 old-k8s-version-873100 kubelet[1664]: E0513 23:54:30.545414    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	I0513 23:54:33.810481   10160 logs.go:123] Gathering logs for etcd [85fd9289e443] ...
	I0513 23:54:33.810481   10160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85fd9289e443"
	I0513 23:54:33.895482   10160 out.go:304] Setting ErrFile to fd 1776...
	I0513 23:54:33.896017   10160 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0513 23:54:33.896156   10160 out.go:239] X Problems detected in kubelet:
	W0513 23:54:33.896204   10160 out.go:239]   May 13 23:54:04 old-k8s-version-873100 kubelet[1664]: E0513 23:54:04.542127    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.896204   10160 out.go:239]   May 13 23:54:14 old-k8s-version-873100 kubelet[1664]: E0513 23:54:14.552690    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.896204   10160 out.go:239]   May 13 23:54:19 old-k8s-version-873100 kubelet[1664]: E0513 23:54:19.539558    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.896204   10160 out.go:239]   May 13 23:54:27 old-k8s-version-873100 kubelet[1664]: E0513 23:54:27.537151    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.896204   10160 out.go:239]   May 13 23:54:30 old-k8s-version-873100 kubelet[1664]: E0513 23:54:30.545414    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	I0513 23:54:33.896204   10160 out.go:304] Setting ErrFile to fd 1776...
	I0513 23:54:33.897292   10160 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 23:54:43.920327   10160 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:56669/healthz ...
	I0513 23:54:45.939394   10160 api_server.go:279] https://127.0.0.1:56669/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0513 23:54:46.038133   10160 api_server.go:103] status: https://127.0.0.1:56669/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0513 23:54:46.240977   10160 out.go:177] 
	W0513 23:54:46.437367   10160 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: cluster wait timed out during healthz check
	W0513 23:54:46.437367   10160 out.go:239] * 
	W0513 23:54:46.439781   10160 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0513 23:54:46.643226   10160 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-windows-amd64.exe start -p old-k8s-version-873100 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-873100
helpers_test.go:235: (dbg) docker inspect old-k8s-version-873100:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "435662456c56477fbacf0c8596964b6ff458caa40ac8fc1f0667d4ba3b7c3180",
	        "Created": "2024-05-13T23:43:42.820470816Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 299746,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-05-13T23:47:53.126576028Z",
	            "FinishedAt": "2024-05-13T23:47:48.684445053Z"
	        },
	        "Image": "sha256:5a6e59a9bdc0d32876fd51e3702c6cb16f38b145ed5528e5f0bfb1de21e70803",
	        "ResolvConfPath": "/var/lib/docker/containers/435662456c56477fbacf0c8596964b6ff458caa40ac8fc1f0667d4ba3b7c3180/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/435662456c56477fbacf0c8596964b6ff458caa40ac8fc1f0667d4ba3b7c3180/hostname",
	        "HostsPath": "/var/lib/docker/containers/435662456c56477fbacf0c8596964b6ff458caa40ac8fc1f0667d4ba3b7c3180/hosts",
	        "LogPath": "/var/lib/docker/containers/435662456c56477fbacf0c8596964b6ff458caa40ac8fc1f0667d4ba3b7c3180/435662456c56477fbacf0c8596964b6ff458caa40ac8fc1f0667d4ba3b7c3180-json.log",
	        "Name": "/old-k8s-version-873100",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-873100:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-873100",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/291ba32859a6125a60add132cfdbfb656d9fdffc9fe6c3c00ff02ac94a1fb6c9-init/diff:/var/lib/docker/overlay2/e3065cc89db7a8fd6915450a1724667534193c4a9eb8348f67381d1430bd11e4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/291ba32859a6125a60add132cfdbfb656d9fdffc9fe6c3c00ff02ac94a1fb6c9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/291ba32859a6125a60add132cfdbfb656d9fdffc9fe6c3c00ff02ac94a1fb6c9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/291ba32859a6125a60add132cfdbfb656d9fdffc9fe6c3c00ff02ac94a1fb6c9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-873100",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-873100/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-873100",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-873100",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-873100",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2662ff7a5f8800ca614f3fe72eeb92206296be485983851fe391ba3c0bef3596",
	            "SandboxKey": "/var/run/docker/netns/2662ff7a5f88",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56670"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56671"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56672"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56673"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56669"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-873100": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:5e:02",
	                    "NetworkID": "d75864af93ca557631b5ccc9ea702f0d83d2b1cf2696482a7cb978e3098d5632",
	                    "EndpointID": "7df06e7ac5d305038adc5385c0629f2c471a2d805abddc2b2d98533d7dc59976",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-873100",
	                        "435662456c56"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-873100 -n old-k8s-version-873100
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-873100 -n old-k8s-version-873100: (2.0649182s)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p old-k8s-version-873100 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p old-k8s-version-873100 logs -n 25: (7.2575384s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                         Args                         |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | -p auto-589900 sudo crictl                           | auto-589900       | minikube4\jenkins | v1.33.1 | 13 May 24 23:54 UTC | 13 May 24 23:54 UTC |
	|         | pods                                                 |                   |                   |         |                     |                     |
	| ssh     | -p auto-589900 sudo crictl ps                        | auto-589900       | minikube4\jenkins | v1.33.1 | 13 May 24 23:54 UTC | 13 May 24 23:54 UTC |
	|         | --all                                                |                   |                   |         |                     |                     |
	| ssh     | -p auto-589900 sudo find                             | auto-589900       | minikube4\jenkins | v1.33.1 | 13 May 24 23:54 UTC | 13 May 24 23:54 UTC |
	|         | /etc/cni -type f -exec sh -c                         |                   |                   |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                   |                   |         |                     |                     |
	| ssh     | -p auto-589900 sudo ip a s                           | auto-589900       | minikube4\jenkins | v1.33.1 | 13 May 24 23:54 UTC | 13 May 24 23:54 UTC |
	| delete  | -p newest-cni-949100                                 | newest-cni-949100 | minikube4\jenkins | v1.33.1 | 13 May 24 23:54 UTC | 13 May 24 23:54 UTC |
	| ssh     | -p auto-589900 sudo ip r s                           | auto-589900       | minikube4\jenkins | v1.33.1 | 13 May 24 23:54 UTC | 13 May 24 23:54 UTC |
	| start   | -p calico-589900 --memory=3072                       | calico-589900     | minikube4\jenkins | v1.33.1 | 13 May 24 23:54 UTC |                     |
	|         | --alsologtostderr --wait=true                        |                   |                   |         |                     |                     |
	|         | --wait-timeout=15m                                   |                   |                   |         |                     |                     |
	|         | --cni=calico --driver=docker                         |                   |                   |         |                     |                     |
	| ssh     | -p auto-589900 sudo                                  | auto-589900       | minikube4\jenkins | v1.33.1 | 13 May 24 23:54 UTC | 13 May 24 23:54 UTC |
	|         | iptables-save                                        |                   |                   |         |                     |                     |
	| ssh     | -p auto-589900 sudo iptables                         | auto-589900       | minikube4\jenkins | v1.33.1 | 13 May 24 23:54 UTC | 13 May 24 23:54 UTC |
	|         | -t nat -L -n -v                                      |                   |                   |         |                     |                     |
	| ssh     | -p auto-589900 sudo systemctl                        | auto-589900       | minikube4\jenkins | v1.33.1 | 13 May 24 23:54 UTC | 13 May 24 23:54 UTC |
	|         | status kubelet --all --full                          |                   |                   |         |                     |                     |
	|         | --no-pager                                           |                   |                   |         |                     |                     |
	| ssh     | -p auto-589900 sudo systemctl                        | auto-589900       | minikube4\jenkins | v1.33.1 | 13 May 24 23:54 UTC | 13 May 24 23:54 UTC |
	|         | cat kubelet --no-pager                               |                   |                   |         |                     |                     |
	| ssh     | -p auto-589900 sudo journalctl                       | auto-589900       | minikube4\jenkins | v1.33.1 | 13 May 24 23:54 UTC | 13 May 24 23:54 UTC |
	|         | -xeu kubelet --all --full                            |                   |                   |         |                     |                     |
	|         | --no-pager                                           |                   |                   |         |                     |                     |
	| ssh     | -p auto-589900 sudo cat                              | auto-589900       | minikube4\jenkins | v1.33.1 | 13 May 24 23:54 UTC | 13 May 24 23:54 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                   |                   |         |                     |                     |
	| ssh     | -p auto-589900 sudo cat                              | auto-589900       | minikube4\jenkins | v1.33.1 | 13 May 24 23:54 UTC | 13 May 24 23:54 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                   |                   |         |                     |                     |
	| ssh     | -p auto-589900 sudo systemctl                        | auto-589900       | minikube4\jenkins | v1.33.1 | 13 May 24 23:54 UTC | 13 May 24 23:54 UTC |
	|         | status docker --all --full                           |                   |                   |         |                     |                     |
	|         | --no-pager                                           |                   |                   |         |                     |                     |
	| ssh     | -p auto-589900 sudo systemctl                        | auto-589900       | minikube4\jenkins | v1.33.1 | 13 May 24 23:54 UTC | 13 May 24 23:54 UTC |
	|         | cat docker --no-pager                                |                   |                   |         |                     |                     |
	| ssh     | -p auto-589900 sudo cat                              | auto-589900       | minikube4\jenkins | v1.33.1 | 13 May 24 23:54 UTC | 13 May 24 23:54 UTC |
	|         | /etc/docker/daemon.json                              |                   |                   |         |                     |                     |
	| ssh     | -p auto-589900 sudo docker                           | auto-589900       | minikube4\jenkins | v1.33.1 | 13 May 24 23:54 UTC | 13 May 24 23:54 UTC |
	|         | system info                                          |                   |                   |         |                     |                     |
	| ssh     | -p kindnet-589900 pgrep -a                           | kindnet-589900    | minikube4\jenkins | v1.33.1 | 13 May 24 23:54 UTC | 13 May 24 23:54 UTC |
	|         | kubelet                                              |                   |                   |         |                     |                     |
	| ssh     | -p auto-589900 sudo systemctl                        | auto-589900       | minikube4\jenkins | v1.33.1 | 13 May 24 23:54 UTC | 13 May 24 23:54 UTC |
	|         | status cri-docker --all --full                       |                   |                   |         |                     |                     |
	|         | --no-pager                                           |                   |                   |         |                     |                     |
	| ssh     | -p auto-589900 sudo systemctl                        | auto-589900       | minikube4\jenkins | v1.33.1 | 13 May 24 23:54 UTC | 13 May 24 23:54 UTC |
	|         | cat cri-docker --no-pager                            |                   |                   |         |                     |                     |
	| ssh     | -p auto-589900 sudo cat                              | auto-589900       | minikube4\jenkins | v1.33.1 | 13 May 24 23:54 UTC | 13 May 24 23:54 UTC |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                   |                   |         |                     |                     |
	| ssh     | -p auto-589900 sudo cat                              | auto-589900       | minikube4\jenkins | v1.33.1 | 13 May 24 23:54 UTC | 13 May 24 23:54 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                   |                   |         |                     |                     |
	| ssh     | -p auto-589900 sudo                                  | auto-589900       | minikube4\jenkins | v1.33.1 | 13 May 24 23:54 UTC | 13 May 24 23:54 UTC |
	|         | cri-dockerd --version                                |                   |                   |         |                     |                     |
	| ssh     | -p auto-589900 sudo systemctl                        | auto-589900       | minikube4\jenkins | v1.33.1 | 13 May 24 23:54 UTC |                     |
	|         | status containerd --all --full                       |                   |                   |         |                     |                     |
	|         | --no-pager                                           |                   |                   |         |                     |                     |
	|---------|------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/13 23:54:23
	Running on machine: minikube4
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0513 23:54:23.590926    7828 out.go:291] Setting OutFile to fd 1916 ...
	I0513 23:54:23.591408    7828 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 23:54:23.591408    7828 out.go:304] Setting ErrFile to fd 1988...
	I0513 23:54:23.591408    7828 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 23:54:23.630122    7828 out.go:298] Setting JSON to false
	I0513 23:54:23.633815    7828 start.go:129] hostinfo: {"hostname":"minikube4","uptime":10701,"bootTime":1715633761,"procs":211,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4355 Build 19045.4355","kernelVersion":"10.0.19045.4355 Build 19045.4355","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W0513 23:54:23.634012    7828 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 23:54:23.641946    7828 out.go:177] * [calico-589900] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4355 Build 19045.4355
	I0513 23:54:23.647980    7828 notify.go:220] Checking for updates...
	I0513 23:54:23.649676    7828 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0513 23:54:23.655540    7828 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0513 23:54:23.660211    7828 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I0513 23:54:23.665594    7828 out.go:177]   - MINIKUBE_LOCATION=18872
	I0513 23:54:23.671691    7828 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0513 23:54:19.491868   14756 pod_ready.go:102] pod "coredns-7db6d8ff4d-mgqj4" in "kube-system" namespace has status "Ready":"False"
	I0513 23:54:21.776015   14756 pod_ready.go:102] pod "coredns-7db6d8ff4d-mgqj4" in "kube-system" namespace has status "Ready":"False"
	I0513 23:54:23.675804    7828 config.go:182] Loaded profile config "auto-589900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 23:54:23.676581    7828 config.go:182] Loaded profile config "kindnet-589900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 23:54:23.676581    7828 config.go:182] Loaded profile config "old-k8s-version-873100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0513 23:54:23.677162    7828 driver.go:392] Setting default libvirt URI to qemu:///system
	I0513 23:54:24.060342    7828 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0513 23:54:24.080304    7828 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0513 23:54:24.552407    7828 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:96 OomKillDisable:true NGoroutines:97 SystemTime:2024-05-13 23:54:24.49230371 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0513 23:54:24.557909    7828 out.go:177] * Using the docker driver based on user configuration
	I0513 23:54:24.562744    7828 start.go:297] selected driver: docker
	I0513 23:54:24.562744    7828 start.go:901] validating driver "docker" against <nil>
	I0513 23:54:24.562744    7828 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0513 23:54:24.671009    7828 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0513 23:54:25.137600    7828 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:96 OomKillDisable:true NGoroutines:97 SystemTime:2024-05-13 23:54:25.077303145 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0513 23:54:25.138170    7828 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0513 23:54:25.140978    7828 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0513 23:54:25.146648    7828 out.go:177] * Using Docker Desktop driver with root privileges
	I0513 23:54:25.149509    7828 cni.go:84] Creating CNI manager for "calico"
	I0513 23:54:25.149509    7828 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0513 23:54:25.149509    7828 start.go:340] cluster config:
	{Name:calico-589900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:calico-589900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 23:54:25.152256    7828 out.go:177] * Starting "calico-589900" primary control-plane node in "calico-589900" cluster
	I0513 23:54:25.157892    7828 cache.go:121] Beginning downloading kic base image for docker with docker
	I0513 23:54:25.161891    7828 out.go:177] * Pulling base image v0.0.44 ...
	I0513 23:54:25.165988    7828 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 23:54:25.166563    7828 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e in local docker daemon
	I0513 23:54:25.166802    7828 preload.go:147] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0513 23:54:25.166849    7828 cache.go:56] Caching tarball of preloaded images
	I0513 23:54:25.167479    7828 preload.go:173] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0513 23:54:25.167695    7828 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0513 23:54:25.168055    7828 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-589900\config.json ...
	I0513 23:54:25.168409    7828 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-589900\config.json: {Name:mkfecb1c5e8d68ed297f27dd94a400b1bfbc1a0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 23:54:25.398575    7828 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e in local docker daemon, skipping pull
	I0513 23:54:25.398575    7828 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e exists in daemon, skipping load
	I0513 23:54:25.398575    7828 cache.go:194] Successfully downloaded all kic artifacts
	I0513 23:54:25.398575    7828 start.go:360] acquireMachinesLock for calico-589900: {Name:mkd0a60b6e65a93e0abb91660a7aae446a80b42c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 23:54:25.399111    7828 start.go:364] duration metric: took 0s to acquireMachinesLock for "calico-589900"
	I0513 23:54:25.399368    7828 start.go:93] Provisioning new machine with config: &{Name:calico-589900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:calico-589900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 23:54:25.399722    7828 start.go:125] createHost starting for "" (driver="docker")
	I0513 23:54:25.407250    7828 out.go:204] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0513 23:54:25.407859    7828 start.go:159] libmachine.API.Create for "calico-589900" (driver="docker")
	I0513 23:54:25.407984    7828 client.go:168] LocalClient.Create starting
	I0513 23:54:25.408187    7828 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I0513 23:54:25.408187    7828 main.go:141] libmachine: Decoding PEM data...
	I0513 23:54:25.408722    7828 main.go:141] libmachine: Parsing certificate...
	I0513 23:54:25.408782    7828 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I0513 23:54:25.408782    7828 main.go:141] libmachine: Decoding PEM data...
	I0513 23:54:25.408782    7828 main.go:141] libmachine: Parsing certificate...
	I0513 23:54:25.418500    7828 cli_runner.go:164] Run: docker network inspect calico-589900 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0513 23:54:25.654754    7828 cli_runner.go:211] docker network inspect calico-589900 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0513 23:54:25.669530    7828 network_create.go:281] running [docker network inspect calico-589900] to gather additional debugging logs...
	I0513 23:54:25.669530    7828 cli_runner.go:164] Run: docker network inspect calico-589900
	W0513 23:54:25.905081    7828 cli_runner.go:211] docker network inspect calico-589900 returned with exit code 1
	I0513 23:54:25.905081    7828 network_create.go:284] error running [docker network inspect calico-589900]: docker network inspect calico-589900: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-589900 not found
	I0513 23:54:25.905081    7828 network_create.go:286] output of [docker network inspect calico-589900]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-589900 not found
	
	** /stderr **
	I0513 23:54:25.920190    7828 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0513 23:54:26.161466    7828 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0513 23:54:26.191886    7828 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0513 23:54:26.218184    7828 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001506c60}
	I0513 23:54:26.218184    7828 network_create.go:124] attempt to create docker network calico-589900 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0513 23:54:26.227092    7828 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-589900 calico-589900
	W0513 23:54:26.432574    7828 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-589900 calico-589900 returned with exit code 1
	W0513 23:54:26.432703    7828 network_create.go:149] failed to create docker network calico-589900 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-589900 calico-589900: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0513 23:54:26.432778    7828 network_create.go:116] failed to create docker network calico-589900 192.168.67.0/24, will retry: subnet is taken
	I0513 23:54:26.471935    7828 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0513 23:54:26.496145    7828 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0015a0240}
	I0513 23:54:26.496145    7828 network_create.go:124] attempt to create docker network calico-589900 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0513 23:54:26.510387    7828 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-589900 calico-589900
	I0513 23:54:27.014736    7828 network_create.go:108] docker network calico-589900 192.168.76.0/24 created
	I0513 23:54:27.014736    7828 kic.go:121] calculated static IP "192.168.76.2" for the "calico-589900" container
	I0513 23:54:27.034209    7828 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0513 23:54:27.252875    7828 cli_runner.go:164] Run: docker volume create calico-589900 --label name.minikube.sigs.k8s.io=calico-589900 --label created_by.minikube.sigs.k8s.io=true
	I0513 23:54:27.459319    7828 oci.go:103] Successfully created a docker volume calico-589900
	I0513 23:54:27.473736    7828 cli_runner.go:164] Run: docker run --rm --name calico-589900-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-589900 --entrypoint /usr/bin/test -v calico-589900:/var gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e -d /var/lib
	I0513 23:54:23.782851   14756 pod_ready.go:102] pod "coredns-7db6d8ff4d-mgqj4" in "kube-system" namespace has status "Ready":"False"
	I0513 23:54:26.278034   14756 pod_ready.go:102] pod "coredns-7db6d8ff4d-mgqj4" in "kube-system" namespace has status "Ready":"False"
	I0513 23:54:30.680737   10160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0513 23:54:30.714447   10160 api_server.go:72] duration metric: took 5m54.7092528s to wait for apiserver process to appear ...
	I0513 23:54:30.714566   10160 api_server.go:88] waiting for apiserver healthz status ...
	I0513 23:54:30.732074   10160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 23:54:30.832490   10160 logs.go:276] 2 containers: [f57f199bd05f e6e4cb552a6a]
	I0513 23:54:30.851585   10160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 23:54:29.412716    7828 cli_runner.go:217] Completed: docker run --rm --name calico-589900-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-589900 --entrypoint /usr/bin/test -v calico-589900:/var gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e -d /var/lib: (1.9387222s)
	I0513 23:54:29.412795    7828 oci.go:107] Successfully prepared a docker volume calico-589900
	I0513 23:54:29.412895    7828 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 23:54:29.413581    7828 kic.go:194] Starting extracting preloaded images to volume ...
	I0513 23:54:29.425455    7828 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-589900:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e -I lz4 -xf /preloaded.tar -C /extractDir
	I0513 23:54:28.768070   14756 pod_ready.go:102] pod "coredns-7db6d8ff4d-mgqj4" in "kube-system" namespace has status "Ready":"False"
	I0513 23:54:30.777501   14756 pod_ready.go:102] pod "coredns-7db6d8ff4d-mgqj4" in "kube-system" namespace has status "Ready":"False"
	I0513 23:54:32.853950   14756 pod_ready.go:92] pod "coredns-7db6d8ff4d-mgqj4" in "kube-system" namespace has status "Ready":"True"
	I0513 23:54:32.854131   14756 pod_ready.go:81] duration metric: took 29.1130861s for pod "coredns-7db6d8ff4d-mgqj4" in "kube-system" namespace to be "Ready" ...
	I0513 23:54:32.854219   14756 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-sfmq8" in "kube-system" namespace to be "Ready" ...
	I0513 23:54:32.913184   14756 pod_ready.go:97] error getting pod "coredns-7db6d8ff4d-sfmq8" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-sfmq8" not found
	I0513 23:54:32.913184   14756 pod_ready.go:81] duration metric: took 58.9621ms for pod "coredns-7db6d8ff4d-sfmq8" in "kube-system" namespace to be "Ready" ...
	E0513 23:54:32.913184   14756 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-7db6d8ff4d-sfmq8" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-sfmq8" not found
	I0513 23:54:32.913184   14756 pod_ready.go:78] waiting up to 15m0s for pod "etcd-kindnet-589900" in "kube-system" namespace to be "Ready" ...
	I0513 23:54:32.962081   14756 pod_ready.go:92] pod "etcd-kindnet-589900" in "kube-system" namespace has status "Ready":"True"
	I0513 23:54:32.962081   14756 pod_ready.go:81] duration metric: took 48.8949ms for pod "etcd-kindnet-589900" in "kube-system" namespace to be "Ready" ...
	I0513 23:54:32.962081   14756 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-kindnet-589900" in "kube-system" namespace to be "Ready" ...
	I0513 23:54:32.984242   14756 pod_ready.go:92] pod "kube-apiserver-kindnet-589900" in "kube-system" namespace has status "Ready":"True"
	I0513 23:54:32.984298   14756 pod_ready.go:81] duration metric: took 22.2158ms for pod "kube-apiserver-kindnet-589900" in "kube-system" namespace to be "Ready" ...
	I0513 23:54:32.984363   14756 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-kindnet-589900" in "kube-system" namespace to be "Ready" ...
	I0513 23:54:33.010629   14756 pod_ready.go:92] pod "kube-controller-manager-kindnet-589900" in "kube-system" namespace has status "Ready":"True"
	I0513 23:54:33.010629   14756 pod_ready.go:81] duration metric: took 26.2648ms for pod "kube-controller-manager-kindnet-589900" in "kube-system" namespace to be "Ready" ...
	I0513 23:54:33.010629   14756 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-xwrz4" in "kube-system" namespace to be "Ready" ...
	I0513 23:54:33.036243   14756 pod_ready.go:92] pod "kube-proxy-xwrz4" in "kube-system" namespace has status "Ready":"True"
	I0513 23:54:33.036243   14756 pod_ready.go:81] duration metric: took 25.612ms for pod "kube-proxy-xwrz4" in "kube-system" namespace to be "Ready" ...
	I0513 23:54:33.036243   14756 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-kindnet-589900" in "kube-system" namespace to be "Ready" ...
	I0513 23:54:33.374554   14756 pod_ready.go:92] pod "kube-scheduler-kindnet-589900" in "kube-system" namespace has status "Ready":"True"
	I0513 23:54:33.374554   14756 pod_ready.go:81] duration metric: took 338.2958ms for pod "kube-scheduler-kindnet-589900" in "kube-system" namespace to be "Ready" ...
	I0513 23:54:33.374638   14756 pod_ready.go:38] duration metric: took 29.7486159s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0513 23:54:33.374710   14756 api_server.go:52] waiting for apiserver process to appear ...
	I0513 23:54:33.400398   14756 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0513 23:54:33.457410   14756 api_server.go:72] duration metric: took 32.9026736s to wait for apiserver process to appear ...
	I0513 23:54:33.457410   14756 api_server.go:88] waiting for apiserver healthz status ...
	I0513 23:54:33.457410   14756 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:56927/healthz ...
	I0513 23:54:33.489543   14756 api_server.go:279] https://127.0.0.1:56927/healthz returned 200:
	ok
	I0513 23:54:33.497117   14756 api_server.go:141] control plane version: v1.30.0
	I0513 23:54:33.497218   14756 api_server.go:131] duration metric: took 39.7617ms to wait for apiserver health ...
	I0513 23:54:33.497218   14756 system_pods.go:43] waiting for kube-system pods to appear ...
	I0513 23:54:33.611116   14756 system_pods.go:59] 8 kube-system pods found
	I0513 23:54:33.611669   14756 system_pods.go:61] "coredns-7db6d8ff4d-mgqj4" [751750f1-1795-4557-ae1f-920cea824b85] Running
	I0513 23:54:33.613205   14756 system_pods.go:61] "etcd-kindnet-589900" [1585d89a-e033-4041-87a4-4e1bfabb1b86] Running
	I0513 23:54:33.613205   14756 system_pods.go:61] "kindnet-wnl6s" [0e0ca8c1-951f-437a-b347-bad9fa18feb3] Running
	I0513 23:54:33.613294   14756 system_pods.go:61] "kube-apiserver-kindnet-589900" [9b56d3ee-c599-4c62-a893-64741045348f] Running
	I0513 23:54:33.613294   14756 system_pods.go:61] "kube-controller-manager-kindnet-589900" [63dcd29b-67b7-4d88-9147-f41eaa866961] Running
	I0513 23:54:33.613294   14756 system_pods.go:61] "kube-proxy-xwrz4" [ace9a2a3-1aa1-411c-b2dd-333113e15ebc] Running
	I0513 23:54:33.613294   14756 system_pods.go:61] "kube-scheduler-kindnet-589900" [b222520d-2591-40f0-af0c-bb26f467208a] Running
	I0513 23:54:33.613294   14756 system_pods.go:61] "storage-provisioner" [00133757-b3fd-4010-bedf-9caa4073af3e] Running
	I0513 23:54:33.613372   14756 system_pods.go:74] duration metric: took 116.0711ms to wait for pod list to return data ...
	I0513 23:54:33.613372   14756 default_sa.go:34] waiting for default service account to be created ...
	I0513 23:54:33.783017   14756 default_sa.go:45] found service account: "default"
	I0513 23:54:33.783137   14756 default_sa.go:55] duration metric: took 169.7569ms for default service account to be created ...
	I0513 23:54:33.783137   14756 system_pods.go:116] waiting for k8s-apps to be running ...
	I0513 23:54:33.994168   14756 system_pods.go:86] 8 kube-system pods found
	I0513 23:54:33.994799   14756 system_pods.go:89] "coredns-7db6d8ff4d-mgqj4" [751750f1-1795-4557-ae1f-920cea824b85] Running
	I0513 23:54:33.994799   14756 system_pods.go:89] "etcd-kindnet-589900" [1585d89a-e033-4041-87a4-4e1bfabb1b86] Running
	I0513 23:54:33.994799   14756 system_pods.go:89] "kindnet-wnl6s" [0e0ca8c1-951f-437a-b347-bad9fa18feb3] Running
	I0513 23:54:33.994870   14756 system_pods.go:89] "kube-apiserver-kindnet-589900" [9b56d3ee-c599-4c62-a893-64741045348f] Running
	I0513 23:54:33.994870   14756 system_pods.go:89] "kube-controller-manager-kindnet-589900" [63dcd29b-67b7-4d88-9147-f41eaa866961] Running
	I0513 23:54:33.994924   14756 system_pods.go:89] "kube-proxy-xwrz4" [ace9a2a3-1aa1-411c-b2dd-333113e15ebc] Running
	I0513 23:54:33.994924   14756 system_pods.go:89] "kube-scheduler-kindnet-589900" [b222520d-2591-40f0-af0c-bb26f467208a] Running
	I0513 23:54:33.994924   14756 system_pods.go:89] "storage-provisioner" [00133757-b3fd-4010-bedf-9caa4073af3e] Running
	I0513 23:54:33.994982   14756 system_pods.go:126] duration metric: took 211.7235ms to wait for k8s-apps to be running ...
	I0513 23:54:33.994982   14756 system_svc.go:44] waiting for kubelet service to be running ....
	I0513 23:54:34.016703   14756 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0513 23:54:34.061026   14756 system_svc.go:56] duration metric: took 66.0407ms WaitForService to wait for kubelet
	I0513 23:54:34.061026   14756 kubeadm.go:576] duration metric: took 33.506262s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0513 23:54:34.061026   14756 node_conditions.go:102] verifying NodePressure condition ...
	I0513 23:54:34.177389   14756 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I0513 23:54:34.177469   14756 node_conditions.go:123] node cpu capacity is 16
	I0513 23:54:34.177538   14756 node_conditions.go:105] duration metric: took 116.5066ms to run NodePressure ...
	I0513 23:54:34.177538   14756 start.go:240] waiting for startup goroutines ...
	I0513 23:54:34.177616   14756 start.go:245] waiting for cluster config update ...
	I0513 23:54:34.177616   14756 start.go:254] writing updated cluster config ...
	I0513 23:54:34.200027   14756 ssh_runner.go:195] Run: rm -f paused
	I0513 23:54:34.388435   14756 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0513 23:54:34.391026   14756 out.go:177] * Done! kubectl is now configured to use "kindnet-589900" cluster and "default" namespace by default
	I0513 23:54:30.942751   10160 logs.go:276] 2 containers: [39664a48c5f4 85fd9289e443]
	I0513 23:54:30.955651   10160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 23:54:31.022442   10160 logs.go:276] 2 containers: [106035bb9094 916ecbe3fd9a]
	I0513 23:54:31.037663   10160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 23:54:31.099190   10160 logs.go:276] 2 containers: [60bccc7c5567 007b44664263]
	I0513 23:54:31.112045   10160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 23:54:31.179413   10160 logs.go:276] 2 containers: [bb60d8e76c76 9aa55abdebde]
	I0513 23:54:31.193205   10160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 23:54:31.253031   10160 logs.go:276] 2 containers: [2641bad5af28 24863edca480]
	I0513 23:54:31.274331   10160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 23:54:31.345427   10160 logs.go:276] 0 containers: []
	W0513 23:54:31.345427   10160 logs.go:278] No container was found matching "kindnet"
	I0513 23:54:31.367214   10160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 23:54:31.441396   10160 logs.go:276] 2 containers: [ff0f3609f076 2a5a3383f4d3]
	I0513 23:54:31.456382   10160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0513 23:54:31.533000   10160 logs.go:276] 1 containers: [2053fa9b003e]
	I0513 23:54:31.533000   10160 logs.go:123] Gathering logs for etcd [39664a48c5f4] ...
	I0513 23:54:31.533000   10160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39664a48c5f4"
	I0513 23:54:31.641269   10160 logs.go:123] Gathering logs for coredns [916ecbe3fd9a] ...
	I0513 23:54:31.641269   10160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 916ecbe3fd9a"
	I0513 23:54:31.711274   10160 logs.go:123] Gathering logs for kube-controller-manager [24863edca480] ...
	I0513 23:54:31.711430   10160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24863edca480"
	I0513 23:54:31.824043   10160 logs.go:123] Gathering logs for dmesg ...
	I0513 23:54:31.824043   10160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 23:54:31.863959   10160 logs.go:123] Gathering logs for kube-apiserver [f57f199bd05f] ...
	I0513 23:54:31.864070   10160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f57f199bd05f"
	I0513 23:54:31.988619   10160 logs.go:123] Gathering logs for kube-scheduler [60bccc7c5567] ...
	I0513 23:54:31.988711   10160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60bccc7c5567"
	I0513 23:54:32.056673   10160 logs.go:123] Gathering logs for kube-scheduler [007b44664263] ...
	I0513 23:54:32.056734   10160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 007b44664263"
	I0513 23:54:32.132892   10160 logs.go:123] Gathering logs for kube-proxy [9aa55abdebde] ...
	I0513 23:54:32.132892   10160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9aa55abdebde"
	I0513 23:54:32.207259   10160 logs.go:123] Gathering logs for storage-provisioner [ff0f3609f076] ...
	I0513 23:54:32.207374   10160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff0f3609f076"
	I0513 23:54:32.297440   10160 logs.go:123] Gathering logs for kubernetes-dashboard [2053fa9b003e] ...
	I0513 23:54:32.297440   10160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2053fa9b003e"
	I0513 23:54:32.384073   10160 logs.go:123] Gathering logs for Docker ...
	I0513 23:54:32.384073   10160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 23:54:32.484253   10160 logs.go:123] Gathering logs for describe nodes ...
	I0513 23:54:32.484367   10160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 23:54:32.855664   10160 logs.go:123] Gathering logs for kube-apiserver [e6e4cb552a6a] ...
	I0513 23:54:32.855728   10160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e4cb552a6a"
	I0513 23:54:33.035474   10160 logs.go:123] Gathering logs for container status ...
	I0513 23:54:33.035474   10160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 23:54:33.164611   10160 logs.go:123] Gathering logs for kube-controller-manager [2641bad5af28] ...
	I0513 23:54:33.164662   10160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2641bad5af28"
	I0513 23:54:33.266456   10160 logs.go:123] Gathering logs for storage-provisioner [2a5a3383f4d3] ...
	I0513 23:54:33.266456   10160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a5a3383f4d3"
	I0513 23:54:33.416575   10160 logs.go:123] Gathering logs for coredns [106035bb9094] ...
	I0513 23:54:33.416653   10160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 106035bb9094"
	I0513 23:54:33.516663   10160 logs.go:123] Gathering logs for kube-proxy [bb60d8e76c76] ...
	I0513 23:54:33.516663   10160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb60d8e76c76"
	I0513 23:54:33.599596   10160 logs.go:123] Gathering logs for kubelet ...
	I0513 23:54:33.599861   10160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0513 23:54:33.733642   10160 logs.go:138] Found kubelet problem: May 13 23:48:49 old-k8s-version-873100 kubelet[1664]: E0513 23:48:49.163787    1664 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-873100" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-873100' and this object
	W0513 23:54:33.734802   10160 logs.go:138] Found kubelet problem: May 13 23:48:49 old-k8s-version-873100 kubelet[1664]: E0513 23:48:49.165308    1664 reflector.go:138] object-"kube-system"/"storage-provisioner-token-z4cmv": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-z4cmv" is forbidden: User "system:node:old-k8s-version-873100" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-873100' and this object
	W0513 23:54:33.736263   10160 logs.go:138] Found kubelet problem: May 13 23:48:49 old-k8s-version-873100 kubelet[1664]: E0513 23:48:49.168117    1664 reflector.go:138] object-"default"/"default-token-trqms": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-trqms" is forbidden: User "system:node:old-k8s-version-873100" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-873100' and this object
	W0513 23:54:33.736990   10160 logs.go:138] Found kubelet problem: May 13 23:48:49 old-k8s-version-873100 kubelet[1664]: E0513 23:48:49.168185    1664 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-873100" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-873100' and this object
	W0513 23:54:33.736990   10160 logs.go:138] Found kubelet problem: May 13 23:48:49 old-k8s-version-873100 kubelet[1664]: E0513 23:48:49.168230    1664 reflector.go:138] object-"kube-system"/"coredns-token-42crg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-42crg" is forbidden: User "system:node:old-k8s-version-873100" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-873100' and this object
	W0513 23:54:33.742587   10160 logs.go:138] Found kubelet problem: May 13 23:48:49 old-k8s-version-873100 kubelet[1664]: E0513 23:48:49.369371    1664 reflector.go:138] object-"kube-system"/"kube-proxy-token-kxxbw": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-kxxbw" is forbidden: User "system:node:old-k8s-version-873100" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-873100' and this object
	W0513 23:54:33.743306   10160 logs.go:138] Found kubelet problem: May 13 23:48:49 old-k8s-version-873100 kubelet[1664]: E0513 23:48:49.372416    1664 reflector.go:138] object-"kube-system"/"metrics-server-token-wgntv": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-wgntv" is forbidden: User "system:node:old-k8s-version-873100" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-873100' and this object
	W0513 23:54:33.753735   10160 logs.go:138] Found kubelet problem: May 13 23:48:55 old-k8s-version-873100 kubelet[1664]: E0513 23:48:55.555361    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0513 23:54:33.756017   10160 logs.go:138] Found kubelet problem: May 13 23:48:56 old-k8s-version-873100 kubelet[1664]: E0513 23:48:56.818771    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.757054   10160 logs.go:138] Found kubelet problem: May 13 23:48:57 old-k8s-version-873100 kubelet[1664]: E0513 23:48:57.878580    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.761215   10160 logs.go:138] Found kubelet problem: May 13 23:49:09 old-k8s-version-873100 kubelet[1664]: E0513 23:49:09.628842    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0513 23:54:33.765261   10160 logs.go:138] Found kubelet problem: May 13 23:49:17 old-k8s-version-873100 kubelet[1664]: E0513 23:49:17.525796    1664 pod_workers.go:191] Error syncing pod 06586d62-3ad3-4fbc-aacc-246d964ee67a ("storage-provisioner_kube-system(06586d62-3ad3-4fbc-aacc-246d964ee67a)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(06586d62-3ad3-4fbc-aacc-246d964ee67a)"
	W0513 23:54:33.765546   10160 logs.go:138] Found kubelet problem: May 13 23:49:21 old-k8s-version-873100 kubelet[1664]: E0513 23:49:21.594051    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.769483   10160 logs.go:138] Found kubelet problem: May 13 23:49:35 old-k8s-version-873100 kubelet[1664]: E0513 23:49:35.291592    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0513 23:54:33.773559   10160 logs.go:138] Found kubelet problem: May 13 23:49:35 old-k8s-version-873100 kubelet[1664]: E0513 23:49:35.677781    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0513 23:54:33.773559   10160 logs.go:138] Found kubelet problem: May 13 23:49:36 old-k8s-version-873100 kubelet[1664]: E0513 23:49:36.004593    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.774270   10160 logs.go:138] Found kubelet problem: May 13 23:49:47 old-k8s-version-873100 kubelet[1664]: E0513 23:49:47.579186    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.776534   10160 logs.go:138] Found kubelet problem: May 13 23:49:49 old-k8s-version-873100 kubelet[1664]: E0513 23:49:49.108018    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0513 23:54:33.779046   10160 logs.go:138] Found kubelet problem: May 13 23:49:58 old-k8s-version-873100 kubelet[1664]: E0513 23:49:58.572594    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.779272   10160 logs.go:138] Found kubelet problem: May 13 23:50:00 old-k8s-version-873100 kubelet[1664]: E0513 23:50:00.572911    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.779625   10160 logs.go:138] Found kubelet problem: May 13 23:50:11 old-k8s-version-873100 kubelet[1664]: E0513 23:50:11.574387    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.782429   10160 logs.go:138] Found kubelet problem: May 13 23:50:13 old-k8s-version-873100 kubelet[1664]: E0513 23:50:13.053232    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0513 23:54:33.782483   10160 logs.go:138] Found kubelet problem: May 13 23:50:24 old-k8s-version-873100 kubelet[1664]: E0513 23:50:24.590995    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.786719   10160 logs.go:138] Found kubelet problem: May 13 23:50:26 old-k8s-version-873100 kubelet[1664]: E0513 23:50:26.623478    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0513 23:54:33.787269   10160 logs.go:138] Found kubelet problem: May 13 23:50:36 old-k8s-version-873100 kubelet[1664]: E0513 23:50:36.566159    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.787642   10160 logs.go:138] Found kubelet problem: May 13 23:50:40 old-k8s-version-873100 kubelet[1664]: E0513 23:50:40.566224    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.788214   10160 logs.go:138] Found kubelet problem: May 13 23:50:47 old-k8s-version-873100 kubelet[1664]: E0513 23:50:47.567340    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.788708   10160 logs.go:138] Found kubelet problem: May 13 23:50:53 old-k8s-version-873100 kubelet[1664]: E0513 23:50:53.567507    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.793435   10160 logs.go:138] Found kubelet problem: May 13 23:51:01 old-k8s-version-873100 kubelet[1664]: E0513 23:51:01.128714    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0513 23:54:33.794126   10160 logs.go:138] Found kubelet problem: May 13 23:51:06 old-k8s-version-873100 kubelet[1664]: E0513 23:51:06.565035    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.794126   10160 logs.go:138] Found kubelet problem: May 13 23:51:12 old-k8s-version-873100 kubelet[1664]: E0513 23:51:12.567471    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.794906   10160 logs.go:138] Found kubelet problem: May 13 23:51:20 old-k8s-version-873100 kubelet[1664]: E0513 23:51:20.575231    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.795312   10160 logs.go:138] Found kubelet problem: May 13 23:51:26 old-k8s-version-873100 kubelet[1664]: E0513 23:51:26.592110    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.795577   10160 logs.go:138] Found kubelet problem: May 13 23:51:35 old-k8s-version-873100 kubelet[1664]: E0513 23:51:35.569043    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.795577   10160 logs.go:138] Found kubelet problem: May 13 23:51:38 old-k8s-version-873100 kubelet[1664]: E0513 23:51:38.572143    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.796802   10160 logs.go:138] Found kubelet problem: May 13 23:51:49 old-k8s-version-873100 kubelet[1664]: E0513 23:51:49.657257    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0513 23:54:33.796802   10160 logs.go:138] Found kubelet problem: May 13 23:51:53 old-k8s-version-873100 kubelet[1664]: E0513 23:51:53.564886    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.799442   10160 logs.go:138] Found kubelet problem: May 13 23:52:02 old-k8s-version-873100 kubelet[1664]: E0513 23:52:02.564687    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.799872   10160 logs.go:138] Found kubelet problem: May 13 23:52:06 old-k8s-version-873100 kubelet[1664]: E0513 23:52:06.558154    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.800172   10160 logs.go:138] Found kubelet problem: May 13 23:52:17 old-k8s-version-873100 kubelet[1664]: E0513 23:52:17.564379    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.800392   10160 logs.go:138] Found kubelet problem: May 13 23:52:18 old-k8s-version-873100 kubelet[1664]: E0513 23:52:18.560064    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.801032   10160 logs.go:138] Found kubelet problem: May 13 23:52:31 old-k8s-version-873100 kubelet[1664]: E0513 23:52:31.564002    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.802838   10160 logs.go:138] Found kubelet problem: May 13 23:52:32 old-k8s-version-873100 kubelet[1664]: E0513 23:52:32.268190    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0513 23:54:33.802838   10160 logs.go:138] Found kubelet problem: May 13 23:52:45 old-k8s-version-873100 kubelet[1664]: E0513 23:52:45.559043    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.802838   10160 logs.go:138] Found kubelet problem: May 13 23:52:45 old-k8s-version-873100 kubelet[1664]: E0513 23:52:45.559125    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.802838   10160 logs.go:138] Found kubelet problem: May 13 23:52:58 old-k8s-version-873100 kubelet[1664]: E0513 23:52:58.564922    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.804927   10160 logs.go:138] Found kubelet problem: May 13 23:52:58 old-k8s-version-873100 kubelet[1664]: E0513 23:52:58.567539    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.804927   10160 logs.go:138] Found kubelet problem: May 13 23:53:11 old-k8s-version-873100 kubelet[1664]: E0513 23:53:11.552039    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.805654   10160 logs.go:138] Found kubelet problem: May 13 23:53:12 old-k8s-version-873100 kubelet[1664]: E0513 23:53:12.548582    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.805984   10160 logs.go:138] Found kubelet problem: May 13 23:53:25 old-k8s-version-873100 kubelet[1664]: E0513 23:53:25.555702    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.806266   10160 logs.go:138] Found kubelet problem: May 13 23:53:27 old-k8s-version-873100 kubelet[1664]: E0513 23:53:27.548228    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.806507   10160 logs.go:138] Found kubelet problem: May 13 23:53:38 old-k8s-version-873100 kubelet[1664]: E0513 23:53:38.570162    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.807044   10160 logs.go:138] Found kubelet problem: May 13 23:53:39 old-k8s-version-873100 kubelet[1664]: E0513 23:53:39.542579    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.807435   10160 logs.go:138] Found kubelet problem: May 13 23:53:52 old-k8s-version-873100 kubelet[1664]: E0513 23:53:52.541470    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.807773   10160 logs.go:138] Found kubelet problem: May 13 23:53:53 old-k8s-version-873100 kubelet[1664]: E0513 23:53:53.542320    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.808440   10160 logs.go:138] Found kubelet problem: May 13 23:54:03 old-k8s-version-873100 kubelet[1664]: E0513 23:54:03.543124    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.808692   10160 logs.go:138] Found kubelet problem: May 13 23:54:04 old-k8s-version-873100 kubelet[1664]: E0513 23:54:04.542127    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.809108   10160 logs.go:138] Found kubelet problem: May 13 23:54:14 old-k8s-version-873100 kubelet[1664]: E0513 23:54:14.552690    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.809486   10160 logs.go:138] Found kubelet problem: May 13 23:54:19 old-k8s-version-873100 kubelet[1664]: E0513 23:54:19.539558    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.810110   10160 logs.go:138] Found kubelet problem: May 13 23:54:27 old-k8s-version-873100 kubelet[1664]: E0513 23:54:27.537151    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.810481   10160 logs.go:138] Found kubelet problem: May 13 23:54:30 old-k8s-version-873100 kubelet[1664]: E0513 23:54:30.545414    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	I0513 23:54:33.810481   10160 logs.go:123] Gathering logs for etcd [85fd9289e443] ...
	I0513 23:54:33.810481   10160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85fd9289e443"
	I0513 23:54:33.895482   10160 out.go:304] Setting ErrFile to fd 1776...
	I0513 23:54:33.896017   10160 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0513 23:54:33.896156   10160 out.go:239] X Problems detected in kubelet:
	W0513 23:54:33.896204   10160 out.go:239]   May 13 23:54:04 old-k8s-version-873100 kubelet[1664]: E0513 23:54:04.542127    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.896204   10160 out.go:239]   May 13 23:54:14 old-k8s-version-873100 kubelet[1664]: E0513 23:54:14.552690    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.896204   10160 out.go:239]   May 13 23:54:19 old-k8s-version-873100 kubelet[1664]: E0513 23:54:19.539558    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.896204   10160 out.go:239]   May 13 23:54:27 old-k8s-version-873100 kubelet[1664]: E0513 23:54:27.537151    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0513 23:54:33.896204   10160 out.go:239]   May 13 23:54:30 old-k8s-version-873100 kubelet[1664]: E0513 23:54:30.545414    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	I0513 23:54:33.896204   10160 out.go:304] Setting ErrFile to fd 1776...
	I0513 23:54:33.897292   10160 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 23:54:43.920327   10160 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:56669/healthz ...
	I0513 23:54:45.939394   10160 api_server.go:279] https://127.0.0.1:56669/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0513 23:54:46.038133   10160 api_server.go:103] status: https://127.0.0.1:56669/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0513 23:54:46.240977   10160 out.go:177] 
	W0513 23:54:46.437367   10160 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: cluster wait timed out during healthz check
	W0513 23:54:46.437367   10160 out.go:239] * 
	W0513 23:54:46.439781   10160 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0513 23:54:46.643226   10160 out.go:177] 
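	The healthz dumps above follow kube-apiserver's probe format: one `[+]` (passing) or `[-]` (failing) line per check. As a quick triage step, the failing checks can be filtered out of a captured response with grep; this is a sketch over an abbreviated copy of the 500 response from this run, not part of the test harness itself.

```shell
# Sketch: isolate failing probes ([-] lines) from a saved healthz response.
# The sample text is abbreviated from the 500 response captured above.
healthz='[+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/bootstrap-controller ok
healthz check failed'

# Print only the failing checks.
printf '%s\n' "$healthz" | grep '^\[-\]'
```

	Applied to this run it pins the 500 on the etcd check alone, which is consistent with the slow etcd range requests shown later in the log.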
	
	
	==> Docker <==
	May 13 23:54:19 old-k8s-version-873100 dockerd[1311]: 2024/05/13 23:54:19 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:54:20 old-k8s-version-873100 dockerd[1311]: 2024/05/13 23:54:20 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:54:20 old-k8s-version-873100 dockerd[1311]: 2024/05/13 23:54:20 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:54:20 old-k8s-version-873100 dockerd[1311]: 2024/05/13 23:54:20 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:54:20 old-k8s-version-873100 dockerd[1311]: 2024/05/13 23:54:20 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:54:20 old-k8s-version-873100 dockerd[1311]: 2024/05/13 23:54:20 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:54:20 old-k8s-version-873100 dockerd[1311]: 2024/05/13 23:54:20 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:54:31 old-k8s-version-873100 dockerd[1311]: 2024/05/13 23:54:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:54:31 old-k8s-version-873100 dockerd[1311]: 2024/05/13 23:54:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:54:31 old-k8s-version-873100 dockerd[1311]: 2024/05/13 23:54:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:54:31 old-k8s-version-873100 dockerd[1311]: 2024/05/13 23:54:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:54:32 old-k8s-version-873100 dockerd[1311]: 2024/05/13 23:54:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:54:32 old-k8s-version-873100 dockerd[1311]: 2024/05/13 23:54:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:54:32 old-k8s-version-873100 dockerd[1311]: 2024/05/13 23:54:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:54:32 old-k8s-version-873100 dockerd[1311]: 2024/05/13 23:54:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:54:32 old-k8s-version-873100 dockerd[1311]: 2024/05/13 23:54:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:54:32 old-k8s-version-873100 dockerd[1311]: 2024/05/13 23:54:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:54:33 old-k8s-version-873100 dockerd[1311]: 2024/05/13 23:54:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:54:33 old-k8s-version-873100 dockerd[1311]: 2024/05/13 23:54:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:54:33 old-k8s-version-873100 dockerd[1311]: 2024/05/13 23:54:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:54:33 old-k8s-version-873100 dockerd[1311]: 2024/05/13 23:54:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:54:33 old-k8s-version-873100 dockerd[1311]: 2024/05/13 23:54:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:54:41 old-k8s-version-873100 dockerd[1311]: time="2024-05-13T23:54:41.606915280Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host" spanID=c8c389ab961a53ea traceID=70254033c7e8610e27eaeff8d161eef3
	May 13 23:54:41 old-k8s-version-873100 dockerd[1311]: time="2024-05-13T23:54:41.607029007Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host" spanID=c8c389ab961a53ea traceID=70254033c7e8610e27eaeff8d161eef3
	May 13 23:54:41 old-k8s-version-873100 dockerd[1311]: time="2024-05-13T23:54:41.621528708Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host" spanID=c8c389ab961a53ea traceID=70254033c7e8610e27eaeff8d161eef3
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2053fa9b003e3       kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93        5 minutes ago       Running             kubernetes-dashboard      0                   38711e1d4b8aa       kubernetes-dashboard-cd95d586-s2jfh
	ff0f3609f076c       6e38f40d628db                                                                                         5 minutes ago       Running             storage-provisioner       2                   74252520494c9       storage-provisioner
	106035bb90940       bfe3a36ebd252                                                                                         5 minutes ago       Running             coredns                   1                   9e2a44cb8bf12       coredns-74ff55c5b-wn76j
	bb60d8e76c760       10cc881966cfd                                                                                         5 minutes ago       Running             kube-proxy                1                   aba457fe34414       kube-proxy-ndb7h
	b44a7f22c8df4       56cc512116c8f                                                                                         6 minutes ago       Running             busybox                   1                   1470e2c7fbd68       busybox
	2a5a3383f4d3c       6e38f40d628db                                                                                         6 minutes ago       Exited              storage-provisioner       1                   74252520494c9       storage-provisioner
	60bccc7c5567c       3138b6e3d4712                                                                                         6 minutes ago       Running             kube-scheduler            1                   a8d0a74ddaa96       kube-scheduler-old-k8s-version-873100
	2641bad5af28f       b9fa1895dcaa6                                                                                         6 minutes ago       Running             kube-controller-manager   1                   5f852db41c0e6       kube-controller-manager-old-k8s-version-873100
	39664a48c5f4a       0369cf4303ffd                                                                                         6 minutes ago       Running             etcd                      1                   cb257c91ccbeb       etcd-old-k8s-version-873100
	f57f199bd05fc       ca9843d3b5454                                                                                         6 minutes ago       Running             kube-apiserver            1                   a767b8c4f0491       kube-apiserver-old-k8s-version-873100
	94078157a84b7       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   7 minutes ago       Exited              busybox                   0                   e2b1a64e48078       busybox
	916ecbe3fd9a5       bfe3a36ebd252                                                                                         9 minutes ago       Exited              coredns                   0                   2f686c5038dba       coredns-74ff55c5b-wn76j
	9aa55abdebdec       10cc881966cfd                                                                                         9 minutes ago       Exited              kube-proxy                0                   a585347e1b8bf       kube-proxy-ndb7h
	24863edca4809       b9fa1895dcaa6                                                                                         9 minutes ago       Exited              kube-controller-manager   0                   3e6ea7e1be86c       kube-controller-manager-old-k8s-version-873100
	007b44664263e       3138b6e3d4712                                                                                         9 minutes ago       Exited              kube-scheduler            0                   7eaebeba68d08       kube-scheduler-old-k8s-version-873100
	e6e4cb552a6a6       ca9843d3b5454                                                                                         9 minutes ago       Exited              kube-apiserver            0                   f6fc4870ebbfc       kube-apiserver-old-k8s-version-873100
	85fd9289e4434       0369cf4303ffd                                                                                         9 minutes ago       Exited              etcd                      0                   486cd3c5a9007       etcd-old-k8s-version-873100
	
	
	==> coredns [106035bb9094] <==
	I0513 23:49:17.161995       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-05-13 23:48:56.090335175 +0000 UTC m=+0.092396010) (total time: 21.073845764s):
	Trace[2019727887]: [21.073845764s] [21.073845764s] END
	E0513 23:49:17.162146       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	I0513 23:49:17.162230       1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-05-13 23:48:56.090361879 +0000 UTC m=+0.092422714) (total time: 21.074225023s):
	Trace[1427131847]: [21.074225023s] [21.074225023s] END
	E0513 23:49:17.162248       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	I0513 23:49:17.162424       1 trace.go:116] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-05-13 23:48:56.090337776 +0000 UTC m=+0.092398611) (total time: 21.074491164s):
	Trace[911902081]: [21.074491164s] [21.074491164s] END
	E0513 23:49:17.162710       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 512bc0e06a520fa44f35dc15de10fdd6
	CoreDNS-1.7.0
	linux/amd64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:33188 - 64535 "HINFO IN 688096134417006916.7420273124118181206. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.033733897s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [916ecbe3fd9a] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.7.0
	linux/amd64, go1.14.4, f59c03d
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = 512bc0e06a520fa44f35dc15de10fdd6
	[INFO] Reloading complete
	[INFO] 127.0.0.1:58059 - 12279 "HINFO IN 7869154775025884049.997924229727814573. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.034699134s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-873100
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-873100
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bf4e5d623f67cc0fbec852b09e6284e0ebf63761
	                    minikube.k8s.io/name=old-k8s-version-873100
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_13T23_45_14_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 May 2024 23:45:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-873100
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 May 2024 23:54:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 May 2024 23:50:11 +0000   Mon, 13 May 2024 23:45:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 May 2024 23:50:11 +0000   Mon, 13 May 2024 23:45:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 May 2024 23:50:11 +0000   Mon, 13 May 2024 23:45:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 May 2024 23:50:11 +0000   Mon, 13 May 2024 23:45:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    old-k8s-version-873100
	Capacity:
	  cpu:                16
	  ephemeral-storage:  1055762868Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32868764Ki
	  pods:               110
	Allocatable:
	  cpu:                16
	  ephemeral-storage:  1055762868Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32868764Ki
	  pods:               110
	System Info:
	  Machine ID:                 3033e70467e248f7b7dc0192f03e8f49
	  System UUID:                3033e70467e248f7b7dc0192f03e8f49
	  Boot ID:                    e642bd6d-2f44-4251-bc65-a922b73ecc4a
	  Kernel Version:             5.15.146.1-microsoft-standard-WSL2
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.1
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m31s
	  kube-system                 coredns-74ff55c5b-wn76j                           100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     9m24s
	  kube-system                 etcd-old-k8s-version-873100                       100m (0%)     0 (0%)      100Mi (0%)       0 (0%)         9m38s
	  kube-system                 kube-apiserver-old-k8s-version-873100             250m (1%)     0 (0%)      0 (0%)           0 (0%)         9m38s
	  kube-system                 kube-controller-manager-old-k8s-version-873100    200m (1%)     0 (0%)      0 (0%)           0 (0%)         9m38s
	  kube-system                 kube-proxy-ndb7h                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m23s
	  kube-system                 kube-scheduler-old-k8s-version-873100             100m (0%)     0 (0%)      0 (0%)           0 (0%)         9m38s
	  kube-system                 metrics-server-9975d5f86-q9ldg                    100m (0%)     0 (0%)      200Mi (0%)       0 (0%)         7m18s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m19s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-5j5w9         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m42s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-s2jfh               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (5%)   0 (0%)
	  memory             370Mi (1%)  170Mi (0%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  9m59s (x6 over 10m)    kubelet     Node old-k8s-version-873100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m59s (x5 over 10m)    kubelet     Node old-k8s-version-873100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m59s (x5 over 10m)    kubelet     Node old-k8s-version-873100 status is now: NodeHasSufficientPID
	  Normal  Starting                 9m39s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m39s                  kubelet     Node old-k8s-version-873100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m39s                  kubelet     Node old-k8s-version-873100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m39s                  kubelet     Node old-k8s-version-873100 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m38s                  kubelet     Node old-k8s-version-873100 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9m38s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m28s                  kubelet     Node old-k8s-version-873100 status is now: NodeReady
	  Normal  Starting                 9m19s                  kube-proxy  Starting kube-proxy.
	  Normal  Starting                 6m20s                  kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m20s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m19s (x8 over 6m20s)  kubelet     Node old-k8s-version-873100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m19s (x8 over 6m20s)  kubelet     Node old-k8s-version-873100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m19s (x7 over 6m20s)  kubelet     Node old-k8s-version-873100 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m59s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[May13 23:36] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [39664a48c5f4] <==
	2024-05-13 23:54:39.017311 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-13 23:54:47.142529 W | etcdserver: read-only range request "key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" count_only:true " with result "range_response_count:0 size:7" took too long (2.42743443s) to execute
	2024-05-13 23:54:47.142818 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (3.204819405s) to execute
	2024-05-13 23:54:47.142968 W | etcdserver: read-only range request "key:\"/registry/clusterroles/\" range_end:\"/registry/clusterroles0\" count_only:true " with result "range_response_count:0 size:7" took too long (3.65741117s) to execute
	2024-05-13 23:54:47.143069 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (3.482866631s) to execute
	2024-05-13 23:54:47.143118 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.460138605s) to execute
	2024-05-13 23:54:47.143138 W | etcdserver: read-only range request "key:\"/registry/podsecuritypolicy/\" range_end:\"/registry/podsecuritypolicy0\" count_only:true " with result "range_response_count:0 size:5" took too long (3.168473231s) to execute
	2024-05-13 23:54:47.143415 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/kube-apiserver-old-k8s-version-873100.17cf31661d5c62da\" " with result "range_response_count:1 size:851" took too long (1.459822229s) to execute
	2024-05-13 23:54:47.143622 W | etcdserver: read-only range request "key:\"/registry/csidrivers/\" range_end:\"/registry/csidrivers0\" count_only:true " with result "range_response_count:0 size:5" took too long (3.04093534s) to execute
	2024-05-13 23:54:47.143688 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:257" took too long (1.606972654s) to execute
	2024-05-13 23:54:47.143758 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:1121" took too long (2.120086029s) to execute
	2024-05-13 23:54:47.143867 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (1.645237692s) to execute
	2024-05-13 23:54:47.143902 W | etcdserver: read-only range request "key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" count_only:true " with result "range_response_count:0 size:5" took too long (2.341586504s) to execute
	2024-05-13 23:54:48.080020 W | etcdserver: read-only range request "key:\"/registry/services/specs/default/kubernetes\" " with result "range_response_count:1 size:644" took too long (931.237218ms) to execute
	2024-05-13 23:54:48.091942 W | etcdserver: read-only range request "key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" count_only:true " with result "range_response_count:0 size:5" took too long (496.326823ms) to execute
	2024-05-13 23:54:48.092678 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (768.543141ms) to execute
	2024-05-13 23:54:48.093616 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (928.96297ms) to execute
	2024-05-13 23:54:48.096753 W | etcdserver: read-only range request "key:\"/registry/cronjobs/\" range_end:\"/registry/cronjobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (939.388586ms) to execute
	2024-05-13 23:54:48.382624 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (253.24668ms) to execute
	2024-05-13 23:54:48.851787 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (181.010358ms) to execute
	2024-05-13 23:54:49.013302 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-13 23:54:49.880455 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (276.758845ms) to execute
	2024-05-13 23:54:49.880571 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (223.411736ms) to execute
	2024-05-13 23:54:49.880813 W | etcdserver: read-only range request "key:\"/registry/cronjobs/\" range_end:\"/registry/cronjobs0\" count_only:true " with result "range_response_count:0 size:5" took too long (566.101431ms) to execute
	2024-05-13 23:54:50.464338 W | etcdserver: read-only range request "key:\"/registry/prioritylevelconfigurations/exempt\" " with result "range_response_count:1 size:371" took too long (338.004425ms) to execute
	
	
	==> etcd [85fd9289e443] <==
	2024-05-13 23:45:50.091947 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-13 23:46:00.088139 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-13 23:46:10.079948 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-13 23:46:20.081693 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-13 23:46:30.079760 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-13 23:46:40.080462 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-13 23:46:50.082074 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-13 23:47:00.081311 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-13 23:47:09.261680 W | wal: sync duration of 1.018246465s, expected less than 1s
	2024-05-13 23:47:09.419764 W | etcdserver: request "header:<ID:6571754011125630210 > lease_revoke:<id:5b338f74586444bc>" with result "size:29" took too long (157.720927ms) to execute
	2024-05-13 23:47:09.420136 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (843.8313ms) to execute
	2024-05-13 23:47:09.420676 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (576.345748ms) to execute
	2024-05-13 23:47:09.420743 W | etcdserver: read-only range request "key:\"/registry/networkpolicies/\" range_end:\"/registry/networkpolicies0\" count_only:true " with result "range_response_count:0 size:5" took too long (213.89893ms) to execute
	2024-05-13 23:47:09.420762 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (196.609905ms) to execute
	2024-05-13 23:47:10.077470 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-13 23:47:19.546941 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:1120" took too long (1.189291364s) to execute
	2024-05-13 23:47:19.547066 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (972.175715ms) to execute
	2024-05-13 23:47:19.547158 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (318.442394ms) to execute
	2024-05-13 23:47:20.077000 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-13 23:47:30.076861 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-13 23:47:33.533353 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (312.390541ms) to execute
	2024-05-13 23:47:33.533741 W | etcdserver: read-only range request "key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" " with result "range_response_count:1 size:2218" took too long (232.68432ms) to execute
	2024-05-13 23:47:37.976355 N | pkg/osutil: received terminated signal, shutting down...
	WARNING: 2024/05/13 23:47:38 grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	2024-05-13 23:47:38.070938 I | etcdserver: skipped leadership transfer for single voting member cluster
	
	
	==> kernel <==
	 23:54:55 up  2:57,  0 users,  load average: 9.45, 8.02, 6.61
	Linux old-k8s-version-873100 5.15.146.1-microsoft-standard-WSL2 #1 SMP Thu Jan 11 04:09:03 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kube-apiserver [e6e4cb552a6a] <==
	W0513 23:47:47.447787       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0513 23:47:47.450675       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0513 23:47:47.458561       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0513 23:47:47.469380       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0513 23:47:47.504431       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0513 23:47:47.535252       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0513 23:47:47.542737       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0513 23:47:47.576377       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0513 23:47:47.596818       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0513 23:47:47.607527       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0513 23:47:47.654977       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0513 23:47:47.660892       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0513 23:47:47.672026       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0513 23:47:47.677672       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0513 23:47:47.678621       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0513 23:47:47.678645       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0513 23:47:47.680151       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0513 23:47:47.701918       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0513 23:47:47.755177       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0513 23:47:47.998681       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0513 23:47:48.027635       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0513 23:47:48.040365       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0513 23:47:48.045809       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0513 23:47:48.072775       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0513 23:47:48.107524       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	
	==> kube-apiserver [f57f199bd05f] <==
	Trace[1847346346]: [1.649586642s] [1.649586642s] END
	I0513 23:54:47.149016       1 trace.go:205] Trace[1596209063]: "GuaranteedUpdate etcd3" type:*core.Event (13-May-2024 23:54:45.682) (total time: 1466ms):
	Trace[1596209063]: ---"initial value restored" 1461ms (23:54:00.144)
	Trace[1596209063]: [1.466188966s] [1.466188966s] END
	I0513 23:54:47.149608       1 trace.go:205] Trace[1043569382]: "Patch" url:/api/v1/namespaces/kube-system/events/kube-apiserver-old-k8s-version-873100.17cf31661d5c62da,user-agent:kubelet/v1.20.0 (linux/amd64) kubernetes/af46c47,client:192.168.94.2 (13-May-2024 23:54:45.682) (total time: 1466ms):
	Trace[1043569382]: ---"About to apply patch" 1461ms (23:54:00.144)
	Trace[1043569382]: [1.466849625s] [1.466849625s] END
	I0513 23:54:48.081199       1 trace.go:205] Trace[530580870]: "Get" url:/api/v1/namespaces/default/services/kubernetes,user-agent:kube-apiserver/v1.20.0 (linux/amd64) kubernetes/af46c47,client:::1 (13-May-2024 23:54:47.148) (total time: 933ms):
	Trace[530580870]: ---"About to write a response" 932ms (23:54:00.081)
	Trace[530580870]: [933.046956ms] [933.046956ms] END
	I0513 23:54:48.082659       1 trace.go:205] Trace[1320652320]: "GuaranteedUpdate etcd3" type:*core.Endpoints (13-May-2024 23:54:47.161) (total time: 920ms):
	Trace[1320652320]: ---"Transaction committed" 920ms (23:54:00.082)
	Trace[1320652320]: [920.975641ms] [920.975641ms] END
	I0513 23:54:48.083168       1 trace.go:205] Trace[377625669]: "Update" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.94.2 (13-May-2024 23:54:47.161) (total time: 921ms):
	Trace[377625669]: ---"Object stored in database" 921ms (23:54:00.082)
	Trace[377625669]: [921.650404ms] [921.650404ms] END
	I0513 23:54:48.103793       1 trace.go:205] Trace[1088646185]: "List etcd3" key:/cronjobs,resourceVersion:,resourceVersionMatch:,limit:500,continue: (13-May-2024 23:54:47.156) (total time: 947ms):
	Trace[1088646185]: [947.252787ms] [947.252787ms] END
	I0513 23:54:48.104149       1 trace.go:205] Trace[576272656]: "List" url:/apis/batch/v1beta1/cronjobs,user-agent:kube-controller-manager/v1.20.0 (linux/amd64) kubernetes/af46c47/system:serviceaccount:kube-system:cronjob-controller,client:192.168.94.2 (13-May-2024 23:54:47.156) (total time: 947ms):
	Trace[576272656]: ---"Listing from storage done" 947ms (23:54:00.104)
	Trace[576272656]: [947.649086ms] [947.649086ms] END
	W0513 23:54:50.027090       1 handler_proxy.go:102] no RequestInfo found in the context
	E0513 23:54:50.027216       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0513 23:54:50.027234       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [24863edca480] <==
	I0513 23:45:30.992994       1 range_allocator.go:373] Set node old-k8s-version-873100 PodCIDR to [10.244.0.0/24]
	I0513 23:45:30.994951       1 shared_informer.go:247] Caches are synced for crt configmap 
	I0513 23:45:31.078471       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0513 23:45:31.078894       1 shared_informer.go:247] Caches are synced for attach detach 
	I0513 23:45:31.101344       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-wn76j"
	I0513 23:45:31.179061       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0513 23:45:31.202792       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-nsk4h"
	E0513 23:45:31.205720       1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
	I0513 23:45:31.278740       1 shared_informer.go:247] Caches are synced for resource quota 
	I0513 23:45:31.278791       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0513 23:45:31.279026       1 shared_informer.go:247] Caches are synced for stateful set 
	E0513 23:45:31.379955       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	I0513 23:45:31.389482       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0513 23:45:31.481437       1 shared_informer.go:240] Waiting for caches to sync for resource quota
	I0513 23:45:31.481507       1 shared_informer.go:247] Caches are synced for resource quota 
	I0513 23:45:31.580390       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-ndb7h"
	I0513 23:45:31.590506       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0513 23:45:31.678178       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0513 23:45:31.678289       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	E0513 23:45:31.691802       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"f920dc9b-2a57-489a-952e-c1fdfa0cfcde", ResourceVersion:"251", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63851240714, loc:(*time.Location)(0x6f2f340)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0010224a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0010224c0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0010224e0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc001398480), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001022500), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001022520), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001022560)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc0011ff5c0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0013b7ef8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000429420), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00011a808)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0013b7f48)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0513 23:45:34.401437       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0513 23:45:34.504292       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-nsk4h"
	I0513 23:47:35.878479       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	E0513 23:47:36.080104       1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
	I0513 23:47:36.903206       1 event.go:291] "Event occurred" object="kube-system/metrics-server-9975d5f86" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-9975d5f86-q9ldg"
	
	
	==> kube-controller-manager [2641bad5af28] <==
	E0513 23:50:44.250471       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0513 23:50:49.809009       1 request.go:655] Throttling request took 1.047248042s, request: GET:https://192.168.94.2:8443/apis/admissionregistration.k8s.io/v1?timeout=32s
	W0513 23:50:50.662475       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0513 23:51:14.755885       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0513 23:51:22.311607       1 request.go:655] Throttling request took 1.038951941s, request: GET:https://192.168.94.2:8443/apis/discovery.k8s.io/v1beta1?timeout=32s
	W0513 23:51:23.164968       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0513 23:51:45.258900       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0513 23:51:54.812724       1 request.go:655] Throttling request took 1.045529304s, request: GET:https://192.168.94.2:8443/apis/batch/v1beta1?timeout=32s
	W0513 23:51:55.664155       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0513 23:52:15.759487       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0513 23:52:27.314124       1 request.go:655] Throttling request took 1.045691451s, request: GET:https://192.168.94.2:8443/apis/apiextensions.k8s.io/v1?timeout=32s
	W0513 23:52:28.169333       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0513 23:52:46.258180       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0513 23:52:59.815638       1 request.go:655] Throttling request took 1.045107758s, request: GET:https://192.168.94.2:8443/apis/extensions/v1beta1?timeout=32s
	W0513 23:53:00.677232       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0513 23:53:16.760005       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0513 23:53:32.275480       1 request.go:655] Throttling request took 1.000083509s, request: GET:https://192.168.94.2:8443/apis/autoscaling/v2beta2?timeout=32s
	W0513 23:53:33.179073       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0513 23:53:47.258471       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0513 23:54:04.820391       1 request.go:655] Throttling request took 1.047073843s, request: GET:https://192.168.94.2:8443/apis/batch/v1?timeout=32s
	W0513 23:54:05.672234       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0513 23:54:17.762265       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0513 23:54:37.317601       1 request.go:655] Throttling request took 1.047507687s, request: GET:https://192.168.94.2:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
	W0513 23:54:38.170060       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0513 23:54:48.262816       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	
	==> kube-proxy [9aa55abdebde] <==
	W0513 23:45:35.477773       1 proxier.go:651] Failed to read file /lib/modules/5.15.146.1-microsoft-standard-WSL2/modules.builtin with error open /lib/modules/5.15.146.1-microsoft-standard-WSL2/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0513 23:45:35.483320       1 proxier.go:661] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0513 23:45:35.488975       1 proxier.go:661] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0513 23:45:35.494260       1 proxier.go:661] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0513 23:45:35.500393       1 proxier.go:661] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0513 23:45:35.504631       1 proxier.go:661] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	I0513 23:45:35.720613       1 node.go:172] Successfully retrieved node IP: 192.168.94.2
	I0513 23:45:35.720809       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.94.2), assume IPv4 operation
	W0513 23:45:35.807696       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0513 23:45:35.807934       1 server_others.go:185] Using iptables Proxier.
	I0513 23:45:35.808477       1 server.go:650] Version: v1.20.0
	I0513 23:45:35.809336       1 config.go:315] Starting service config controller
	I0513 23:45:35.809349       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0513 23:45:35.809377       1 config.go:224] Starting endpoint slice config controller
	I0513 23:45:35.809385       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0513 23:45:35.910085       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0513 23:45:35.910166       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-proxy [bb60d8e76c76] <==
	W0513 23:48:55.819513       1 proxier.go:651] Failed to read file /lib/modules/5.15.146.1-microsoft-standard-WSL2/modules.builtin with error open /lib/modules/5.15.146.1-microsoft-standard-WSL2/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0513 23:48:55.824468       1 proxier.go:661] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0513 23:48:55.826811       1 proxier.go:661] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0513 23:48:55.829409       1 proxier.go:661] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0513 23:48:55.831627       1 proxier.go:661] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0513 23:48:55.854569       1 proxier.go:661] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	I0513 23:48:55.876383       1 node.go:172] Successfully retrieved node IP: 192.168.94.2
	I0513 23:48:55.876496       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.94.2), assume IPv4 operation
	W0513 23:48:55.922244       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0513 23:48:55.922558       1 server_others.go:185] Using iptables Proxier.
	I0513 23:48:55.923376       1 server.go:650] Version: v1.20.0
	I0513 23:48:55.954817       1 config.go:315] Starting service config controller
	I0513 23:48:55.954932       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0513 23:48:55.955026       1 config.go:224] Starting endpoint slice config controller
	I0513 23:48:55.955038       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0513 23:48:56.055247       1 shared_informer.go:247] Caches are synced for service config 
	I0513 23:48:56.055376       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-scheduler [007b44664263] <==
	I0513 23:45:10.985527       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0513 23:45:10.985631       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0513 23:45:10.993756       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0513 23:45:10.994011       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0513 23:45:10.994290       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0513 23:45:10.994439       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0513 23:45:10.994631       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0513 23:45:10.994634       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0513 23:45:10.994692       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0513 23:45:10.994831       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0513 23:45:10.995019       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0513 23:45:10.999431       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0513 23:45:10.999592       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0513 23:45:10.999706       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0513 23:45:11.878750       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0513 23:45:11.964789       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0513 23:45:11.993885       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0513 23:45:12.002766       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0513 23:45:12.048425       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0513 23:45:12.100024       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0513 23:45:12.137537       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0513 23:45:12.156673       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0513 23:45:12.180088       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0513 23:45:12.295943       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0513 23:45:14.185954       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kube-scheduler [60bccc7c5567] <==
	I0513 23:48:42.183792       1 serving.go:331] Generated self-signed cert in-memory
	W0513 23:48:49.058736       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0513 23:48:49.058920       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0513 23:48:49.059117       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0513 23:48:49.063945       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0513 23:48:49.362880       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0513 23:48:49.363032       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0513 23:48:49.363046       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0513 23:48:49.363068       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0513 23:48:49.655367       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	May 13 23:52:58 old-k8s-version-873100 kubelet[1664]: E0513 23:52:58.564922    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	May 13 23:52:58 old-k8s-version-873100 kubelet[1664]: E0513 23:52:58.567539    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	May 13 23:53:11 old-k8s-version-873100 kubelet[1664]: E0513 23:53:11.552039    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	May 13 23:53:12 old-k8s-version-873100 kubelet[1664]: E0513 23:53:12.548582    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	May 13 23:53:25 old-k8s-version-873100 kubelet[1664]: E0513 23:53:25.555702    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	May 13 23:53:27 old-k8s-version-873100 kubelet[1664]: E0513 23:53:27.548228    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	May 13 23:53:34 old-k8s-version-873100 kubelet[1664]: W0513 23:53:34.651989    1664 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	May 13 23:53:34 old-k8s-version-873100 kubelet[1664]: W0513 23:53:34.653956    1664 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu0/online: open /sys/devices/system/cpu/cpu0/online: no such file or directory
	May 13 23:53:38 old-k8s-version-873100 kubelet[1664]: E0513 23:53:38.570162    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	May 13 23:53:39 old-k8s-version-873100 kubelet[1664]: E0513 23:53:39.542579    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	May 13 23:53:52 old-k8s-version-873100 kubelet[1664]: E0513 23:53:52.541470    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	May 13 23:53:53 old-k8s-version-873100 kubelet[1664]: E0513 23:53:53.542320    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	May 13 23:54:03 old-k8s-version-873100 kubelet[1664]: E0513 23:54:03.543124    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	May 13 23:54:04 old-k8s-version-873100 kubelet[1664]: E0513 23:54:04.542127    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	May 13 23:54:14 old-k8s-version-873100 kubelet[1664]: E0513 23:54:14.552690    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	May 13 23:54:19 old-k8s-version-873100 kubelet[1664]: E0513 23:54:19.539558    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	May 13 23:54:27 old-k8s-version-873100 kubelet[1664]: E0513 23:54:27.537151    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	May 13 23:54:30 old-k8s-version-873100 kubelet[1664]: E0513 23:54:30.545414    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	May 13 23:54:41 old-k8s-version-873100 kubelet[1664]: E0513 23:54:41.625024    1664 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host
	May 13 23:54:41 old-k8s-version-873100 kubelet[1664]: E0513 23:54:41.625574    1664 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host
	May 13 23:54:41 old-k8s-version-873100 kubelet[1664]: E0513 23:54:41.626398    1664 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-wgntv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host
	May 13 23:54:41 old-k8s-version-873100 kubelet[1664]: E0513 23:54:41.626454    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	May 13 23:54:45 old-k8s-version-873100 kubelet[1664]: E0513 23:54:45.557203    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	May 13 23:54:54 old-k8s-version-873100 kubelet[1664]: E0513 23:54:54.544591    1664 pod_workers.go:191] Error syncing pod 5e4b47d4-363a-4078-8c3e-e66e9530b37b ("metrics-server-9975d5f86-q9ldg_kube-system(5e4b47d4-363a-4078-8c3e-e66e9530b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	May 13 23:54:56 old-k8s-version-873100 kubelet[1664]: E0513 23:54:56.544062    1664 pod_workers.go:191] Error syncing pod d296fad2-7669-4587-b6e1-7ed8f0a09436 ("dashboard-metrics-scraper-8d5bb5db8-5j5w9_kubernetes-dashboard(d296fad2-7669-4587-b6e1-7ed8f0a09436)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	
	
	==> kubernetes-dashboard [2053fa9b003e] <==
	2024/05/13 23:49:35 Starting overwatch
	2024/05/13 23:49:35 Using namespace: kubernetes-dashboard
	2024/05/13 23:49:35 Using in-cluster config to connect to apiserver
	2024/05/13 23:49:35 Using secret token for csrf signing
	2024/05/13 23:49:35 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/05/13 23:49:35 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/05/13 23:49:35 Successful initial request to the apiserver, version: v1.20.0
	2024/05/13 23:49:35 Generating JWE encryption key
	2024/05/13 23:49:35 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/05/13 23:49:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/05/13 23:49:36 Initializing JWE encryption key from synchronized object
	2024/05/13 23:49:36 Creating in-cluster Sidecar client
	2024/05/13 23:49:36 Serving insecurely on HTTP port: 9090
	2024/05/13 23:49:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/13 23:50:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/13 23:50:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/13 23:51:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/13 23:51:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/13 23:52:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/13 23:52:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/13 23:53:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/13 23:53:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/13 23:54:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/13 23:54:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [2a5a3383f4d3] <==
	I0513 23:48:55.275402       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0513 23:49:16.327049       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [ff0f3609f076] <==
	I0513 23:49:33.781211       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0513 23:49:33.863921       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0513 23:49:33.864053       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0513 23:49:51.543772       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0513 23:49:51.544371       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-873100_d59fe9f7-edbe-4350-8047-f90ce81b2d14!
	I0513 23:49:51.544613       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"235e3217-d22b-43dd-9f61-de0743645a64", APIVersion:"v1", ResourceVersion:"823", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-873100_d59fe9f7-edbe-4350-8047-f90ce81b2d14 became leader
	I0513 23:49:51.644892       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-873100_d59fe9f7-edbe-4350-8047-f90ce81b2d14!
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0513 23:54:50.040142     796 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-873100 -n old-k8s-version-873100
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-873100 -n old-k8s-version-873100: (2.1494525s)
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-873100 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
E0513 23:55:00.197796   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-950600\client.crt: The system cannot find the path specified.
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-q9ldg dashboard-metrics-scraper-8d5bb5db8-5j5w9
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-873100 describe pod metrics-server-9975d5f86-q9ldg dashboard-metrics-scraper-8d5bb5db8-5j5w9
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-873100 describe pod metrics-server-9975d5f86-q9ldg dashboard-metrics-scraper-8d5bb5db8-5j5w9: exit status 1 (698.1775ms)

                                                
                                                
** stderr ** 
	E0513 23:55:01.059920    8816 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0513 23:55:01.210039    8816 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0513 23:55:01.246326    8816 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0513 23:55:01.262240    8816 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	Error from server (NotFound): pods "metrics-server-9975d5f86-q9ldg" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-8d5bb5db8-5j5w9" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-873100 describe pod metrics-server-9975d5f86-q9ldg dashboard-metrics-scraper-8d5bb5db8-5j5w9: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (430.55s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (44.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-062300 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-062300 --alsologtostderr -v=1: exit status 80 (8.0543243s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-062300 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0513 23:51:35.057975   10320 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0513 23:51:35.165714   10320 out.go:291] Setting OutFile to fd 1500 ...
	I0513 23:51:35.186536   10320 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 23:51:35.186536   10320 out.go:304] Setting ErrFile to fd 1940...
	I0513 23:51:35.186656   10320 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 23:51:35.213257   10320 out.go:298] Setting JSON to false
	I0513 23:51:35.213257   10320 mustload.go:65] Loading cluster: default-k8s-diff-port-062300
	I0513 23:51:35.214662   10320 config.go:182] Loaded profile config "default-k8s-diff-port-062300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 23:51:35.243984   10320 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-062300 --format={{.State.Status}}
	I0513 23:51:35.521728   10320 host.go:66] Checking if "default-k8s-diff-port-062300" exists ...
	I0513 23:51:35.539159   10320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-diff-port-062300
	I0513 23:51:35.794502   10320 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disk
s:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.33.1/minikube-v1.33.1-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.33.1-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L m
ount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string:C:\Users\jenkins.minikube4:/minikube-host mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-062300 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0513 23:51:35.956383   10320 out.go:177] * Pausing node default-k8s-diff-port-062300 ... 
	I0513 23:51:35.965784   10320 host.go:66] Checking if "default-k8s-diff-port-062300" exists ...
	I0513 23:51:35.978820   10320 ssh_runner.go:195] Run: systemctl --version
	I0513 23:51:35.993689   10320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-062300
	I0513 23:51:36.180319   10320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56521 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\default-k8s-diff-port-062300\id_rsa Username:docker}
	I0513 23:51:36.407968   10320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0513 23:51:36.461901   10320 pause.go:51] kubelet running: true
	I0513 23:51:36.476358   10320 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0513 23:51:36.813628   10320 ssh_runner.go:195] Run: docker ps --filter status=running --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}
	I0513 23:51:36.867157   10320 docker.go:500] Pausing containers: [a91aebc02bd9 889b473e730e aa021e09b86a addf2cf679ae 8fcc6fb728ce 55e8f1e4d6d5 89ad7563723c d46d56ef2e5d 7109dcea2fb6 c9709ee15651 00800d2623fc 95b301adb6c0 018718677fa4 c12856199872 1b2ebfd09124 ea65c8d787cf f54040a2d27f 943fb30ac336]
	I0513 23:51:36.879454   10320 ssh_runner.go:195] Run: docker pause a91aebc02bd9 889b473e730e aa021e09b86a addf2cf679ae 8fcc6fb728ce 55e8f1e4d6d5 89ad7563723c d46d56ef2e5d 7109dcea2fb6 c9709ee15651 00800d2623fc 95b301adb6c0 018718677fa4 c12856199872 1b2ebfd09124 ea65c8d787cf f54040a2d27f 943fb30ac336
	I0513 23:51:42.613897   10320 ssh_runner.go:235] Completed: docker pause a91aebc02bd9 889b473e730e aa021e09b86a addf2cf679ae 8fcc6fb728ce 55e8f1e4d6d5 89ad7563723c d46d56ef2e5d 7109dcea2fb6 c9709ee15651 00800d2623fc 95b301adb6c0 018718677fa4 c12856199872 1b2ebfd09124 ea65c8d787cf f54040a2d27f 943fb30ac336: (5.734181s)
	I0513 23:51:42.620342   10320 out.go:177] 
	W0513 23:51:42.697042   10320 out.go:239] X Exiting due to GUEST_PAUSE: Pause: pausing containers: docker: docker pause a91aebc02bd9 889b473e730e aa021e09b86a addf2cf679ae 8fcc6fb728ce 55e8f1e4d6d5 89ad7563723c d46d56ef2e5d 7109dcea2fb6 c9709ee15651 00800d2623fc 95b301adb6c0 018718677fa4 c12856199872 1b2ebfd09124 ea65c8d787cf f54040a2d27f 943fb30ac336: Process exited with status 1
	stdout:
	a91aebc02bd9
	889b473e730e
	aa021e09b86a
	addf2cf679ae
	8fcc6fb728ce
	55e8f1e4d6d5
	89ad7563723c
	d46d56ef2e5d
	7109dcea2fb6
	c9709ee15651
	00800d2623fc
	95b301adb6c0
	c12856199872
	1b2ebfd09124
	ea65c8d787cf
	f54040a2d27f
	943fb30ac336
	
	stderr:
	Error response from daemon: cannot pause container 018718677fa4719d51c4ede331131e4149b71e760e47742a99b4ac8ead79c73f: OCI runtime pause failed: unable to freeze: unknown
	
	X Exiting due to GUEST_PAUSE: Pause: pausing containers: docker: docker pause a91aebc02bd9 889b473e730e aa021e09b86a addf2cf679ae 8fcc6fb728ce 55e8f1e4d6d5 89ad7563723c d46d56ef2e5d 7109dcea2fb6 c9709ee15651 00800d2623fc 95b301adb6c0 018718677fa4 c12856199872 1b2ebfd09124 ea65c8d787cf f54040a2d27f 943fb30ac336: Process exited with status 1
	stdout:
	a91aebc02bd9
	889b473e730e
	aa021e09b86a
	addf2cf679ae
	8fcc6fb728ce
	55e8f1e4d6d5
	89ad7563723c
	d46d56ef2e5d
	7109dcea2fb6
	c9709ee15651
	00800d2623fc
	95b301adb6c0
	c12856199872
	1b2ebfd09124
	ea65c8d787cf
	f54040a2d27f
	943fb30ac336
	
	stderr:
	Error response from daemon: cannot pause container 018718677fa4719d51c4ede331131e4149b71e760e47742a99b4ac8ead79c73f: OCI runtime pause failed: unable to freeze: unknown
	
	W0513 23:51:42.697042   10320 out.go:239] * 
	* 
	W0513 23:51:42.852786   10320 out.go:239] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube_pause_0a4d03c8adbe4992011689b475409882710ca950_11.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube_pause_0a4d03c8adbe4992011689b475409882710ca950_11.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0513 23:51:42.952729   10320 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-062300 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-diff-port-062300
helpers_test.go:235: (dbg) docker inspect default-k8s-diff-port-062300:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "353c61b39d334fa83b55d53a55ef25ef4a0a707f490cd46c1d3ccaf606d4d1eb",
	        "Created": "2024-05-13T23:44:16.851168338Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 290363,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-05-13T23:46:26.630756816Z",
	            "FinishedAt": "2024-05-13T23:46:21.989713432Z"
	        },
	        "Image": "sha256:5a6e59a9bdc0d32876fd51e3702c6cb16f38b145ed5528e5f0bfb1de21e70803",
	        "ResolvConfPath": "/var/lib/docker/containers/353c61b39d334fa83b55d53a55ef25ef4a0a707f490cd46c1d3ccaf606d4d1eb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/353c61b39d334fa83b55d53a55ef25ef4a0a707f490cd46c1d3ccaf606d4d1eb/hostname",
	        "HostsPath": "/var/lib/docker/containers/353c61b39d334fa83b55d53a55ef25ef4a0a707f490cd46c1d3ccaf606d4d1eb/hosts",
	        "LogPath": "/var/lib/docker/containers/353c61b39d334fa83b55d53a55ef25ef4a0a707f490cd46c1d3ccaf606d4d1eb/353c61b39d334fa83b55d53a55ef25ef4a0a707f490cd46c1d3ccaf606d4d1eb-json.log",
	        "Name": "/default-k8s-diff-port-062300",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-062300:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-062300",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/115411ea20ab19e2259b2ebacc50b96e6447f04995203753fd9b555d68858dc7-init/diff:/var/lib/docker/overlay2/e3065cc89db7a8fd6915450a1724667534193c4a9eb8348f67381d1430bd11e4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/115411ea20ab19e2259b2ebacc50b96e6447f04995203753fd9b555d68858dc7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/115411ea20ab19e2259b2ebacc50b96e6447f04995203753fd9b555d68858dc7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/115411ea20ab19e2259b2ebacc50b96e6447f04995203753fd9b555d68858dc7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-062300",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-062300/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-062300",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-062300",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-062300",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e7b7eac08a28f99efc319dd68156dc2a0e6fd356e15a817bf93e12ac990ebb71",
	            "SandboxKey": "/var/run/docker/netns/e7b7eac08a28",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56521"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56522"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56523"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56524"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56525"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-062300": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.130.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:82:02",
	                    "NetworkID": "9836f4835f9af64b31b0bca28aacb94567d42531bf87483afa9fd5dddc13a768",
	                    "EndpointID": "4b17d6957ee6a53774e64a51314c2c5320c4f6db7fd4ecbbf834ad43e5e9c011",
	                    "Gateway": "192.168.130.1",
	                    "IPAddress": "192.168.130.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "default-k8s-diff-port-062300",
	                        "353c61b39d33"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-062300 -n default-k8s-diff-port-062300
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-062300 -n default-k8s-diff-port-062300: exit status 2 (1.9086582s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	W0513 23:51:43.399140    9768 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p default-k8s-diff-port-062300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p default-k8s-diff-port-062300 logs -n 25: (14.0895668s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|-------------------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|-------------------|---------|---------------------|---------------------|
	| stop    | -p no-preload-561500                                   | no-preload-561500            | minikube4\jenkins | v1.33.1 | 13 May 24 23:45 UTC | 13 May 24 23:46 UTC |
	|         | --alsologtostderr -v=3                                 |                              |                   |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-524600                 | embed-certs-524600           | minikube4\jenkins | v1.33.1 | 13 May 24 23:46 UTC | 13 May 24 23:46 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |                   |         |                     |                     |
	| start   | -p embed-certs-524600                                  | embed-certs-524600           | minikube4\jenkins | v1.33.1 | 13 May 24 23:46 UTC | 13 May 24 23:50 UTC |
	|         | --memory=2200                                          |                              |                   |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |                   |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |                   |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-062300  | default-k8s-diff-port-062300 | minikube4\jenkins | v1.33.1 | 13 May 24 23:46 UTC | 13 May 24 23:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |                   |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |                   |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-062300 | minikube4\jenkins | v1.33.1 | 13 May 24 23:46 UTC | 13 May 24 23:46 UTC |
	|         | default-k8s-diff-port-062300                           |                              |                   |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |                   |         |                     |                     |
	| addons  | enable dashboard -p no-preload-561500                  | no-preload-561500            | minikube4\jenkins | v1.33.1 | 13 May 24 23:46 UTC | 13 May 24 23:46 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |                   |         |                     |                     |
	| start   | -p no-preload-561500                                   | no-preload-561500            | minikube4\jenkins | v1.33.1 | 13 May 24 23:46 UTC | 13 May 24 23:51 UTC |
	|         | --memory=2200                                          |                              |                   |         |                     |                     |
	|         | --alsologtostderr                                      |                              |                   |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |                   |         |                     |                     |
	|         | --driver=docker                                        |                              |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |                   |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-062300       | default-k8s-diff-port-062300 | minikube4\jenkins | v1.33.1 | 13 May 24 23:46 UTC | 13 May 24 23:46 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |                   |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-062300 | minikube4\jenkins | v1.33.1 | 13 May 24 23:46 UTC | 13 May 24 23:51 UTC |
	|         | default-k8s-diff-port-062300                           |                              |                   |         |                     |                     |
	|         | --memory=2200                                          |                              |                   |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |                   |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |                   |         |                     |                     |
	|         | --driver=docker                                        |                              |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |                   |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-873100        | old-k8s-version-873100       | minikube4\jenkins | v1.33.1 | 13 May 24 23:47 UTC | 13 May 24 23:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |                   |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |                   |         |                     |                     |
	| stop    | -p old-k8s-version-873100                              | old-k8s-version-873100       | minikube4\jenkins | v1.33.1 | 13 May 24 23:47 UTC | 13 May 24 23:47 UTC |
	|         | --alsologtostderr -v=3                                 |                              |                   |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-873100             | old-k8s-version-873100       | minikube4\jenkins | v1.33.1 | 13 May 24 23:47 UTC | 13 May 24 23:47 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |                   |         |                     |                     |
	| start   | -p old-k8s-version-873100                              | old-k8s-version-873100       | minikube4\jenkins | v1.33.1 | 13 May 24 23:47 UTC |                     |
	|         | --memory=2200                                          |                              |                   |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |                   |         |                     |                     |
	|         | --kvm-network=default                                  |                              |                   |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |                   |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |                   |         |                     |                     |
	|         | --keep-context=false                                   |                              |                   |         |                     |                     |
	|         | --driver=docker                                        |                              |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |                   |         |                     |                     |
	| image   | embed-certs-524600 image list                          | embed-certs-524600           | minikube4\jenkins | v1.33.1 | 13 May 24 23:50 UTC | 13 May 24 23:50 UTC |
	|         | --format=json                                          |                              |                   |         |                     |                     |
	| pause   | -p embed-certs-524600                                  | embed-certs-524600           | minikube4\jenkins | v1.33.1 | 13 May 24 23:50 UTC | 13 May 24 23:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |                   |         |                     |                     |
	| unpause | -p embed-certs-524600                                  | embed-certs-524600           | minikube4\jenkins | v1.33.1 | 13 May 24 23:51 UTC | 13 May 24 23:51 UTC |
	|         | --alsologtostderr -v=1                                 |                              |                   |         |                     |                     |
	| delete  | -p embed-certs-524600                                  | embed-certs-524600           | minikube4\jenkins | v1.33.1 | 13 May 24 23:51 UTC | 13 May 24 23:51 UTC |
	| delete  | -p embed-certs-524600                                  | embed-certs-524600           | minikube4\jenkins | v1.33.1 | 13 May 24 23:51 UTC | 13 May 24 23:51 UTC |
	| start   | -p newest-cni-949100 --memory=2200 --alsologtostderr   | newest-cni-949100            | minikube4\jenkins | v1.33.1 | 13 May 24 23:51 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |                   |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |                   |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |                   |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |                   |         |                     |                     |
	|         | --driver=docker --kubernetes-version=v1.30.0           |                              |                   |         |                     |                     |
	| image   | no-preload-561500 image list                           | no-preload-561500            | minikube4\jenkins | v1.33.1 | 13 May 24 23:51 UTC | 13 May 24 23:51 UTC |
	|         | --format=json                                          |                              |                   |         |                     |                     |
	| pause   | -p no-preload-561500                                   | no-preload-561500            | minikube4\jenkins | v1.33.1 | 13 May 24 23:51 UTC | 13 May 24 23:51 UTC |
	|         | --alsologtostderr -v=1                                 |                              |                   |         |                     |                     |
	| unpause | -p no-preload-561500                                   | no-preload-561500            | minikube4\jenkins | v1.33.1 | 13 May 24 23:51 UTC | 13 May 24 23:51 UTC |
	|         | --alsologtostderr -v=1                                 |                              |                   |         |                     |                     |
	| delete  | -p no-preload-561500                                   | no-preload-561500            | minikube4\jenkins | v1.33.1 | 13 May 24 23:51 UTC |                     |
	| image   | default-k8s-diff-port-062300                           | default-k8s-diff-port-062300 | minikube4\jenkins | v1.33.1 | 13 May 24 23:51 UTC | 13 May 24 23:51 UTC |
	|         | image list --format=json                               |                              |                   |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-062300 | minikube4\jenkins | v1.33.1 | 13 May 24 23:51 UTC |                     |
	|         | default-k8s-diff-port-062300                           |                              |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |                   |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/13 23:51:15
	Running on machine: minikube4
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0513 23:51:15.823821    7080 out.go:291] Setting OutFile to fd 1692 ...
	I0513 23:51:15.825112    7080 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 23:51:15.825112    7080 out.go:304] Setting ErrFile to fd 1800...
	I0513 23:51:15.825112    7080 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 23:51:15.852797    7080 out.go:298] Setting JSON to false
	I0513 23:51:15.857423    7080 start.go:129] hostinfo: {"hostname":"minikube4","uptime":10514,"bootTime":1715633761,"procs":211,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4355 Build 19045.4355","kernelVersion":"10.0.19045.4355 Build 19045.4355","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W0513 23:51:15.857423    7080 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 23:51:15.866434    7080 out.go:177] * [newest-cni-949100] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4355 Build 19045.4355
	I0513 23:51:15.874574    7080 notify.go:220] Checking for updates...
	I0513 23:51:15.878169    7080 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0513 23:51:15.883658    7080 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0513 23:51:15.887945    7080 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I0513 23:51:15.891599    7080 out.go:177]   - MINIKUBE_LOCATION=18872
	I0513 23:51:15.896992    7080 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0513 23:51:12.681026   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:51:15.191789   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:51:15.902044    7080 config.go:182] Loaded profile config "default-k8s-diff-port-062300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 23:51:15.902911    7080 config.go:182] Loaded profile config "no-preload-561500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 23:51:15.903702    7080 config.go:182] Loaded profile config "old-k8s-version-873100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0513 23:51:15.904031    7080 driver.go:392] Setting default libvirt URI to qemu:///system
	I0513 23:51:16.278113    7080 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0513 23:51:16.292331    7080 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0513 23:51:16.761534    7080 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:96 OomKillDisable:true NGoroutines:97 SystemTime:2024-05-13 23:51:16.713906934 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0513 23:51:16.765881    7080 out.go:177] * Using the docker driver based on user configuration
	I0513 23:51:16.770622    7080 start.go:297] selected driver: docker
	I0513 23:51:16.770817    7080 start.go:901] validating driver "docker" against <nil>
	I0513 23:51:16.770946    7080 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0513 23:51:16.922050    7080 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0513 23:51:17.384437    7080 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:96 OomKillDisable:true NGoroutines:97 SystemTime:2024-05-13 23:51:17.321044012 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0513 23:51:17.384974    7080 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0513 23:51:17.385101    7080 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0513 23:51:17.386605    7080 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0513 23:51:17.390687    7080 out.go:177] * Using Docker Desktop driver with root privileges
	I0513 23:51:17.393676    7080 cni.go:84] Creating CNI manager for ""
	I0513 23:51:17.393676    7080 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0513 23:51:17.393676    7080 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0513 23:51:17.393676    7080 start.go:340] cluster config:
	{Name:newest-cni-949100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:newest-cni-949100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 23:51:17.400013    7080 out.go:177] * Starting "newest-cni-949100" primary control-plane node in "newest-cni-949100" cluster
	I0513 23:51:17.403549    7080 cache.go:121] Beginning downloading kic base image for docker with docker
	I0513 23:51:17.408208    7080 out.go:177] * Pulling base image v0.0.44 ...
	I0513 23:51:17.413113    7080 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 23:51:17.413113    7080 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e in local docker daemon
	I0513 23:51:17.413378    7080 preload.go:147] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0513 23:51:17.413378    7080 cache.go:56] Caching tarball of preloaded images
	I0513 23:51:17.413824    7080 preload.go:173] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0513 23:51:17.413824    7080 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0513 23:51:17.413824    7080 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-949100\config.json ...
	I0513 23:51:17.414982    7080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-949100\config.json: {Name:mk0058a3714d9deedc014bf75c5208145b57bdbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 23:51:17.602793    7080 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e in local docker daemon, skipping pull
	I0513 23:51:17.602868    7080 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e exists in daemon, skipping load
	I0513 23:51:17.602868    7080 cache.go:194] Successfully downloaded all kic artifacts
	I0513 23:51:17.602868    7080 start.go:360] acquireMachinesLock for newest-cni-949100: {Name:mkc4006f01e8e61ef3da338d7a1dfea80b3a8da3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 23:51:17.602868    7080 start.go:364] duration metric: took 0s to acquireMachinesLock for "newest-cni-949100"
	I0513 23:51:17.602868    7080 start.go:93] Provisioning new machine with config: &{Name:newest-cni-949100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:newest-cni-949100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 23:51:17.603535    7080 start.go:125] createHost starting for "" (driver="docker")
	I0513 23:51:14.227945    8860 logs.go:276] 2 containers: [c12856199872 4cd223df9b05]
	I0513 23:51:14.240340    8860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 23:51:14.314409    8860 logs.go:276] 2 containers: [018718677fa4 d42428e694b5]
	I0513 23:51:14.330183    8860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 23:51:14.403990    8860 logs.go:276] 2 containers: [8fcc6fb728ce f8616bb48bd4]
	I0513 23:51:14.420521    8860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 23:51:14.496925    8860 logs.go:276] 2 containers: [00800d2623fc d5fef62ceb9a]
	I0513 23:51:14.515013    8860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 23:51:14.599648    8860 logs.go:276] 2 containers: [55e8f1e4d6d5 de63ce128d96]
	I0513 23:51:14.618611    8860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 23:51:14.847252    8860 logs.go:276] 2 containers: [95b301adb6c0 f2d573001dd7]
	I0513 23:51:14.862366    8860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 23:51:14.908174    8860 logs.go:276] 0 containers: []
	W0513 23:51:14.908727    8860 logs.go:278] No container was found matching "kindnet"
	I0513 23:51:14.925141    8860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0513 23:51:14.990083    8860 logs.go:276] 1 containers: [a91aebc02bd9]
	I0513 23:51:15.007393    8860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 23:51:15.088094    8860 logs.go:276] 2 containers: [889b473e730e ee111c323bdc]
	I0513 23:51:15.088216    8860 logs.go:123] Gathering logs for Docker ...
	I0513 23:51:15.088263    8860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 23:51:15.210861    8860 logs.go:123] Gathering logs for container status ...
	I0513 23:51:15.210861    8860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 23:51:15.352654    8860 logs.go:123] Gathering logs for etcd [d42428e694b5] ...
	I0513 23:51:15.352654    8860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d42428e694b5"
	I0513 23:51:15.431134    8860 logs.go:123] Gathering logs for coredns [f8616bb48bd4] ...
	I0513 23:51:15.431240    8860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8616bb48bd4"
	I0513 23:51:15.492721    8860 logs.go:123] Gathering logs for kube-scheduler [00800d2623fc] ...
	I0513 23:51:15.492721    8860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00800d2623fc"
	I0513 23:51:15.557445    8860 logs.go:123] Gathering logs for kube-scheduler [d5fef62ceb9a] ...
	I0513 23:51:15.558286    8860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5fef62ceb9a"
	I0513 23:51:15.663362    8860 logs.go:123] Gathering logs for kube-proxy [55e8f1e4d6d5] ...
	I0513 23:51:15.663362    8860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55e8f1e4d6d5"
	I0513 23:51:15.734550    8860 logs.go:123] Gathering logs for kube-controller-manager [f2d573001dd7] ...
	I0513 23:51:15.734550    8860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2d573001dd7"
	I0513 23:51:15.829433    8860 logs.go:123] Gathering logs for describe nodes ...
	I0513 23:51:15.829586    8860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 23:51:16.117739    8860 logs.go:123] Gathering logs for kube-apiserver [4cd223df9b05] ...
	I0513 23:51:16.118362    8860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cd223df9b05"
	I0513 23:51:16.308996    8860 logs.go:123] Gathering logs for kubernetes-dashboard [a91aebc02bd9] ...
	I0513 23:51:16.308996    8860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a91aebc02bd9"
	I0513 23:51:16.385742    8860 logs.go:123] Gathering logs for coredns [8fcc6fb728ce] ...
	I0513 23:51:16.385742    8860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fcc6fb728ce"
	I0513 23:51:16.450143    8860 logs.go:123] Gathering logs for kube-controller-manager [95b301adb6c0] ...
	I0513 23:51:16.450143    8860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95b301adb6c0"
	I0513 23:51:16.700483    8860 logs.go:123] Gathering logs for storage-provisioner [889b473e730e] ...
	I0513 23:51:16.700629    8860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 889b473e730e"
	I0513 23:51:16.793701    8860 logs.go:123] Gathering logs for kubelet ...
	I0513 23:51:16.793701    8860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 23:51:16.899346    8860 logs.go:123] Gathering logs for dmesg ...
	I0513 23:51:16.899346    8860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 23:51:16.931774    8860 logs.go:123] Gathering logs for kube-proxy [de63ce128d96] ...
	I0513 23:51:16.931853    8860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de63ce128d96"
	I0513 23:51:16.997946    8860 logs.go:123] Gathering logs for storage-provisioner [ee111c323bdc] ...
	I0513 23:51:16.998527    8860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee111c323bdc"
	I0513 23:51:17.045255    8860 logs.go:123] Gathering logs for kube-apiserver [c12856199872] ...
	I0513 23:51:17.045349    8860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c12856199872"
	I0513 23:51:17.241572    8860 logs.go:123] Gathering logs for etcd [018718677fa4] ...
	I0513 23:51:17.241572    8860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 018718677fa4"
	I0513 23:51:19.932617    8860 system_pods.go:59] 8 kube-system pods found
	I0513 23:51:19.932750    8860 system_pods.go:61] "coredns-7db6d8ff4d-lqgnn" [75138676-1b6b-448b-a228-e23551186474] Running
	I0513 23:51:19.932750    8860 system_pods.go:61] "etcd-default-k8s-diff-port-062300" [10c0e3b1-3e50-4396-b5e8-f7f88b4d0e8e] Running
	I0513 23:51:19.932750    8860 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-062300" [0d386ca6-74e5-4e8f-a024-a29d92afdd55] Running
	I0513 23:51:19.932750    8860 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-062300" [6660e84e-7f03-46ba-a9d9-c499b5cbbf31] Running
	I0513 23:51:19.932750    8860 system_pods.go:61] "kube-proxy-ztwp2" [fbcfc5f9-0a27-40e2-92e5-b3a87bd21490] Running
	I0513 23:51:19.932750    8860 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-062300" [37c62ddb-dfba-4e2d-8200-d1d836e867bb] Running
	I0513 23:51:19.932750    8860 system_pods.go:61] "metrics-server-569cc877fc-qgl9s" [60a7c560-5161-4bf4-9c27-0114cb776da6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0513 23:51:19.932750    8860 system_pods.go:61] "storage-provisioner" [06063d8b-9aaf-487d-b3c2-8a27f7afd683] Running
	I0513 23:51:19.932863    8860 system_pods.go:74] duration metric: took 5.7878565s to wait for pod list to return data ...
	I0513 23:51:19.932863    8860 default_sa.go:34] waiting for default service account to be created ...
	I0513 23:51:19.940283    8860 default_sa.go:45] found service account: "default"
	I0513 23:51:19.940283    8860 default_sa.go:55] duration metric: took 7.4198ms for default service account to be created ...
	I0513 23:51:19.940283    8860 system_pods.go:116] waiting for k8s-apps to be running ...
	I0513 23:51:19.957238    8860 system_pods.go:86] 8 kube-system pods found
	I0513 23:51:19.957813    8860 system_pods.go:89] "coredns-7db6d8ff4d-lqgnn" [75138676-1b6b-448b-a228-e23551186474] Running
	I0513 23:51:19.957813    8860 system_pods.go:89] "etcd-default-k8s-diff-port-062300" [10c0e3b1-3e50-4396-b5e8-f7f88b4d0e8e] Running
	I0513 23:51:19.957813    8860 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-062300" [0d386ca6-74e5-4e8f-a024-a29d92afdd55] Running
	I0513 23:51:19.957813    8860 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-062300" [6660e84e-7f03-46ba-a9d9-c499b5cbbf31] Running
	I0513 23:51:19.957813    8860 system_pods.go:89] "kube-proxy-ztwp2" [fbcfc5f9-0a27-40e2-92e5-b3a87bd21490] Running
	I0513 23:51:19.957813    8860 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-062300" [37c62ddb-dfba-4e2d-8200-d1d836e867bb] Running
	I0513 23:51:19.957813    8860 system_pods.go:89] "metrics-server-569cc877fc-qgl9s" [60a7c560-5161-4bf4-9c27-0114cb776da6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0513 23:51:19.957932    8860 system_pods.go:89] "storage-provisioner" [06063d8b-9aaf-487d-b3c2-8a27f7afd683] Running
	I0513 23:51:19.957996    8860 system_pods.go:126] duration metric: took 17.648ms to wait for k8s-apps to be running ...
	I0513 23:51:19.957996    8860 system_svc.go:44] waiting for kubelet service to be running ....
	I0513 23:51:19.972683    8860 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0513 23:51:20.000696    8860 system_svc.go:56] duration metric: took 42.6973ms WaitForService to wait for kubelet
	I0513 23:51:20.000696    8860 kubeadm.go:576] duration metric: took 4m36.5081143s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0513 23:51:20.000696    8860 node_conditions.go:102] verifying NodePressure condition ...
	I0513 23:51:20.010193    8860 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I0513 23:51:20.010193    8860 node_conditions.go:123] node cpu capacity is 16
	I0513 23:51:20.010193    8860 node_conditions.go:105] duration metric: took 9.4966ms to run NodePressure ...
	I0513 23:51:20.010193    8860 start.go:240] waiting for startup goroutines ...
	I0513 23:51:20.010193    8860 start.go:245] waiting for cluster config update ...
	I0513 23:51:20.010193    8860 start.go:254] writing updated cluster config ...
	I0513 23:51:20.023798    8860 ssh_runner.go:195] Run: rm -f paused
	I0513 23:51:20.194921    8860 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0513 23:51:20.201991    8860 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-062300" cluster and "default" namespace by default
	I0513 23:51:17.614095    7080 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0513 23:51:17.615295    7080 start.go:159] libmachine.API.Create for "newest-cni-949100" (driver="docker")
	I0513 23:51:17.615295    7080 client.go:168] LocalClient.Create starting
	I0513 23:51:17.615752    7080 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I0513 23:51:17.615752    7080 main.go:141] libmachine: Decoding PEM data...
	I0513 23:51:17.615752    7080 main.go:141] libmachine: Parsing certificate...
	I0513 23:51:17.615752    7080 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I0513 23:51:17.615752    7080 main.go:141] libmachine: Decoding PEM data...
	I0513 23:51:17.615752    7080 main.go:141] libmachine: Parsing certificate...
	I0513 23:51:17.625260    7080 cli_runner.go:164] Run: docker network inspect newest-cni-949100 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0513 23:51:17.812726    7080 cli_runner.go:211] docker network inspect newest-cni-949100 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0513 23:51:17.822099    7080 network_create.go:281] running [docker network inspect newest-cni-949100] to gather additional debugging logs...
	I0513 23:51:17.822099    7080 cli_runner.go:164] Run: docker network inspect newest-cni-949100
	W0513 23:51:17.983655    7080 cli_runner.go:211] docker network inspect newest-cni-949100 returned with exit code 1
	I0513 23:51:17.983655    7080 network_create.go:284] error running [docker network inspect newest-cni-949100]: docker network inspect newest-cni-949100: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-949100 not found
	I0513 23:51:17.983655    7080 network_create.go:286] output of [docker network inspect newest-cni-949100]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-949100 not found
	
	** /stderr **
	I0513 23:51:17.994564    7080 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0513 23:51:18.193670    7080 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0513 23:51:18.224209    7080 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0513 23:51:18.248904    7080 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00146a3c0}
	I0513 23:51:18.248904    7080 network_create.go:124] attempt to create docker network newest-cni-949100 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0513 23:51:18.257362    7080 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-949100 newest-cni-949100
	W0513 23:51:18.461395    7080 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-949100 newest-cni-949100 returned with exit code 1
	W0513 23:51:18.461395    7080 network_create.go:149] failed to create docker network newest-cni-949100 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-949100 newest-cni-949100: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0513 23:51:18.461395    7080 network_create.go:116] failed to create docker network newest-cni-949100 192.168.67.0/24, will retry: subnet is taken
	I0513 23:51:18.484846    7080 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0513 23:51:18.503503    7080 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0015cc630}
	I0513 23:51:18.503567    7080 network_create.go:124] attempt to create docker network newest-cni-949100 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0513 23:51:18.513973    7080 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-949100 newest-cni-949100
	I0513 23:51:18.802590    7080 network_create.go:108] docker network newest-cni-949100 192.168.76.0/24 created
	I0513 23:51:18.805272    7080 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-949100" container
	I0513 23:51:18.821748    7080 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0513 23:51:19.030428    7080 cli_runner.go:164] Run: docker volume create newest-cni-949100 --label name.minikube.sigs.k8s.io=newest-cni-949100 --label created_by.minikube.sigs.k8s.io=true
	I0513 23:51:19.236647    7080 oci.go:103] Successfully created a docker volume newest-cni-949100
	I0513 23:51:19.248886    7080 cli_runner.go:164] Run: docker run --rm --name newest-cni-949100-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-949100 --entrypoint /usr/bin/test -v newest-cni-949100:/var gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e -d /var/lib
	I0513 23:51:17.210231   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:51:19.686269   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:51:21.667198    7080 cli_runner.go:217] Completed: docker run --rm --name newest-cni-949100-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-949100 --entrypoint /usr/bin/test -v newest-cni-949100:/var gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e -d /var/lib: (2.4182018s)
	I0513 23:51:21.667198    7080 oci.go:107] Successfully prepared a docker volume newest-cni-949100
	I0513 23:51:21.667198    7080 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 23:51:21.667198    7080 kic.go:194] Starting extracting preloaded images to volume ...
	I0513 23:51:21.689459    7080 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-949100:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e -I lz4 -xf /preloaded.tar -C /extractDir
	I0513 23:51:22.191246   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:51:24.682056   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:51:27.190295   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:51:29.682021   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:51:32.183605   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:51:34.677985   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:51:39.236415   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	
	
	==> Docker <==
	May 13 23:51:10 default-k8s-diff-port-062300 dockerd[1011]: 2024/05/13 23:51:10 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:51:10 default-k8s-diff-port-062300 dockerd[1011]: 2024/05/13 23:51:10 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:51:10 default-k8s-diff-port-062300 dockerd[1011]: 2024/05/13 23:51:10 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:51:10 default-k8s-diff-port-062300 dockerd[1011]: 2024/05/13 23:51:10 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:51:10 default-k8s-diff-port-062300 dockerd[1011]: 2024/05/13 23:51:10 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:51:10 default-k8s-diff-port-062300 dockerd[1011]: 2024/05/13 23:51:10 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:51:11 default-k8s-diff-port-062300 dockerd[1011]: 2024/05/13 23:51:11 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:51:11 default-k8s-diff-port-062300 dockerd[1011]: 2024/05/13 23:51:11 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:51:11 default-k8s-diff-port-062300 dockerd[1011]: 2024/05/13 23:51:11 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:51:15 default-k8s-diff-port-062300 dockerd[1011]: 2024/05/13 23:51:15 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:51:15 default-k8s-diff-port-062300 dockerd[1011]: 2024/05/13 23:51:15 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:51:15 default-k8s-diff-port-062300 dockerd[1011]: 2024/05/13 23:51:15 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:51:15 default-k8s-diff-port-062300 dockerd[1011]: 2024/05/13 23:51:15 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:51:15 default-k8s-diff-port-062300 dockerd[1011]: 2024/05/13 23:51:15 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:51:15 default-k8s-diff-port-062300 dockerd[1011]: 2024/05/13 23:51:15 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:51:16 default-k8s-diff-port-062300 dockerd[1011]: 2024/05/13 23:51:16 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:51:16 default-k8s-diff-port-062300 dockerd[1011]: 2024/05/13 23:51:16 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:51:16 default-k8s-diff-port-062300 dockerd[1011]: 2024/05/13 23:51:16 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:51:16 default-k8s-diff-port-062300 dockerd[1011]: 2024/05/13 23:51:16 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:51:16 default-k8s-diff-port-062300 dockerd[1011]: 2024/05/13 23:51:16 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:51:16 default-k8s-diff-port-062300 dockerd[1011]: 2024/05/13 23:51:16 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:51:17 default-k8s-diff-port-062300 dockerd[1011]: 2024/05/13 23:51:17 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:51:17 default-k8s-diff-port-062300 dockerd[1011]: 2024/05/13 23:51:17 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:51:17 default-k8s-diff-port-062300 dockerd[1011]: 2024/05/13 23:51:17 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:51:37 default-k8s-diff-port-062300 dockerd[1011]: time="2024-05-13T23:51:37.279774646Z" level=error msg="Handler for POST /v1.45/containers/018718677fa4/pause returned error: cannot pause container 018718677fa4719d51c4ede331131e4149b71e760e47742a99b4ac8ead79c73f: OCI runtime pause failed: unable to freeze: unknown" spanID=22afc4cf088aebad traceID=f3aa1e793ffda908a2ffc7580ffe5a84
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a91aebc02bd94       kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93        4 minutes ago       Running             kubernetes-dashboard      0                   aa021e09b86aa       kubernetes-dashboard-779776cb65-rm6z4
	889b473e730e9       6e38f40d628db                                                                                         4 minutes ago       Running             storage-provisioner       2                   7109dcea2fb66       storage-provisioner
	f0cb058e3c678       56cc512116c8f                                                                                         4 minutes ago       Running             busybox                   1                   ff9a54180054e       busybox
	8fcc6fb728ce0       cbb01a7bd410d                                                                                         4 minutes ago       Running             coredns                   1                   d46d56ef2e5d6       coredns-7db6d8ff4d-lqgnn
	55e8f1e4d6d5c       a0bf559e280cf                                                                                         4 minutes ago       Running             kube-proxy                1                   c9709ee156512       kube-proxy-ztwp2
	ee111c323bdc2       6e38f40d628db                                                                                         4 minutes ago       Exited              storage-provisioner       1                   7109dcea2fb66       storage-provisioner
	00800d2623fc8       259c8277fcbbc                                                                                         5 minutes ago       Running             kube-scheduler            1                   1b2ebfd091244       kube-scheduler-default-k8s-diff-port-062300
	95b301adb6c0b       c7aad43836fa5                                                                                         5 minutes ago       Running             kube-controller-manager   1                   943fb30ac3369       kube-controller-manager-default-k8s-diff-port-062300
	018718677fa47       3861cfcd7c04c                                                                                         5 minutes ago       Running             etcd                      1                   ea65c8d787cfe       etcd-default-k8s-diff-port-062300
	c12856199872e       c42f13656d0b2                                                                                         5 minutes ago       Running             kube-apiserver            1                   f54040a2d27f3       kube-apiserver-default-k8s-diff-port-062300
	72fe335cbf9ed       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   5 minutes ago       Exited              busybox                   0                   17d4ee55244c2       busybox
	f8616bb48bd47       cbb01a7bd410d                                                                                         6 minutes ago       Exited              coredns                   0                   81e6f970b6247       coredns-7db6d8ff4d-lqgnn
	de63ce128d965       a0bf559e280cf                                                                                         6 minutes ago       Exited              kube-proxy                0                   eeebabd6443e5       kube-proxy-ztwp2
	f2d573001dd7e       c7aad43836fa5                                                                                         6 minutes ago       Exited              kube-controller-manager   0                   8034e5ae8850e       kube-controller-manager-default-k8s-diff-port-062300
	4cd223df9b05c       c42f13656d0b2                                                                                         6 minutes ago       Exited              kube-apiserver            0                   d7971dc1c6fc5       kube-apiserver-default-k8s-diff-port-062300
	d5fef62ceb9a3       259c8277fcbbc                                                                                         6 minutes ago       Exited              kube-scheduler            0                   6ff4b37ae8e9f       kube-scheduler-default-k8s-diff-port-062300
	d42428e694b5c       3861cfcd7c04c                                                                                         6 minutes ago       Exited              etcd                      0                   97cbb880cdb83       etcd-default-k8s-diff-port-062300
	
	
	==> coredns [8fcc6fb728ce] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = f869070685748660180df1b7a47d58cdafcf2f368266578c062d1151dc2c900964aecc5975e8882e6de6fdfb6460463e30ebfaad2ec8f0c3c6436f80225b3b5b
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:47174 - 41907 "HINFO IN 5898933462255465371.2595947378248807920. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.054238519s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: Trace[604374891]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (13-May-2024 23:46:58.495) (total time: 21050ms):
	Trace[604374891]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused 21050ms (23:47:19.542)
	Trace[604374891]: [21.05066262s] [21.05066262s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: Trace[352732049]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (13-May-2024 23:46:58.494) (total time: 21051ms):
	Trace[352732049]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused 21050ms (23:47:19.542)
	Trace[352732049]: [21.05139754s] [21.05139754s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: Trace[1622575344]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (13-May-2024 23:46:58.497) (total time: 21049ms):
	Trace[1622575344]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused 21048ms (23:47:19.542)
	Trace[1622575344]: [21.04939462s] [21.04939462s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> coredns [f8616bb48bd4] <==
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: Trace[965457935]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (13-May-2024 23:45:28.185) (total time: 21032ms):
	Trace[965457935]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused 21031ms (23:45:49.213)
	Trace[965457935]: [21.032218695s] [21.032218695s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: Trace[1052278361]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (13-May-2024 23:45:28.185) (total time: 21032ms):
	Trace[1052278361]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused 21031ms (23:45:49.213)
	Trace[1052278361]: [21.032102279s] [21.032102279s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: Trace[1496653283]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (13-May-2024 23:45:28.185) (total time: 21032ms):
	Trace[1496653283]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused 21031ms (23:45:49.213)
	Trace[1496653283]: [21.032259402s] [21.032259402s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = f869070685748660180df1b7a47d58cdafcf2f368266578c062d1151dc2c900964aecc5975e8882e6de6fdfb6460463e30ebfaad2ec8f0c3c6436f80225b3b5b
	[INFO] Reloading complete
	[INFO] 127.0.0.1:60908 - 12213 "HINFO IN 3758755791337204650.2829984788687160444. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.06739987s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	
	
	==> dmesg <==
	[May13 23:36] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [018718677fa4] <==
	{"level":"info","ts":"2024-05-13T23:48:30.681344Z","caller":"traceutil/trace.go:171","msg":"trace[1483908581] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-654756847f-rfwsl; range_end:; response_count:1; response_revision:741; }","duration":"396.831515ms","start":"2024-05-13T23:48:30.284497Z","end":"2024-05-13T23:48:30.681328Z","steps":["trace[1483908581] 'agreement among raft nodes before linearized reading'  (duration: 396.638486ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-13T23:48:30.681381Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-13T23:48:30.284479Z","time spent":"396.892425ms","remote":"127.0.0.1:36422","response type":"/etcdserverpb.KV/Range","request count":0,"request size":80,"response count":1,"response size":4111,"request content":"key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-654756847f-rfwsl\" "}
	{"level":"warn","ts":"2024-05-13T23:48:30.681384Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.65030255s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-13T23:48:30.681497Z","caller":"traceutil/trace.go:171","msg":"trace[723033882] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:741; }","duration":"1.650361159s","start":"2024-05-13T23:48:29.031053Z","end":"2024-05-13T23:48:30.681415Z","steps":["trace[723033882] 'agreement among raft nodes before linearized reading'  (duration: 1.650295049s)"],"step_count":1}
	{"level":"warn","ts":"2024-05-13T23:48:30.681673Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-13T23:48:29.031047Z","time spent":"1.650551388s","remote":"127.0.0.1:36240","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-05-13T23:48:30.681692Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"396.646287ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-569cc877fc-qgl9s.17cf31251ac5d369\" ","response":"range_response_count:1 size:804"}
	{"level":"warn","ts":"2024-05-13T23:48:30.68138Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"395.036742ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-qgl9s\" ","response":"range_response_count:1 size:4247"}
	{"level":"info","ts":"2024-05-13T23:48:30.682018Z","caller":"traceutil/trace.go:171","msg":"trace[1121107047] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-569cc877fc-qgl9s; range_end:; response_count:1; response_revision:741; }","duration":"395.617131ms","start":"2024-05-13T23:48:30.286319Z","end":"2024-05-13T23:48:30.681937Z","steps":["trace[1121107047] 'agreement among raft nodes before linearized reading'  (duration: 394.980434ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-13T23:48:30.682069Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-13T23:48:30.286312Z","time spent":"395.741649ms","remote":"127.0.0.1:36422","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4270,"request content":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-qgl9s\" "}
	{"level":"info","ts":"2024-05-13T23:48:30.681909Z","caller":"traceutil/trace.go:171","msg":"trace[730874148] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-569cc877fc-qgl9s.17cf31251ac5d369; range_end:; response_count:1; response_revision:741; }","duration":"396.771606ms","start":"2024-05-13T23:48:30.285018Z","end":"2024-05-13T23:48:30.681789Z","steps":["trace[730874148] 'agreement among raft nodes before linearized reading'  (duration: 396.383247ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-13T23:48:30.682189Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-13T23:48:30.285009Z","time spent":"397.165166ms","remote":"127.0.0.1:36294","response type":"/etcdserverpb.KV/Range","request count":0,"request size":79,"response count":1,"response size":827,"request content":"key:\"/registry/events/kube-system/metrics-server-569cc877fc-qgl9s.17cf31251ac5d369\" "}
	{"level":"info","ts":"2024-05-13T23:49:31.265527Z","caller":"traceutil/trace.go:171","msg":"trace[275386241] transaction","detail":"{read_only:false; response_revision:813; number_of_response:1; }","duration":"381.547847ms","start":"2024-05-13T23:49:30.883476Z","end":"2024-05-13T23:49:31.265024Z","steps":["trace[275386241] 'process raft request'  (duration: 381.398832ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-13T23:49:31.265662Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"291.02781ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-qgl9s\" ","response":"range_response_count:1 size:4247"}
	{"level":"warn","ts":"2024-05-13T23:49:31.265701Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-13T23:49:30.883464Z","time spent":"382.157206ms","remote":"127.0.0.1:36404","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:811 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-05-13T23:49:31.265713Z","caller":"traceutil/trace.go:171","msg":"trace[1286711214] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-569cc877fc-qgl9s; range_end:; response_count:1; response_revision:813; }","duration":"291.116719ms","start":"2024-05-13T23:49:30.974584Z","end":"2024-05-13T23:49:31.265701Z","steps":["trace[1286711214] 'agreement among raft nodes before linearized reading'  (duration: 290.948603ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-13T23:49:31.265528Z","caller":"traceutil/trace.go:171","msg":"trace[1409489639] linearizableReadLoop","detail":"{readStateIndex:872; appliedIndex:872; }","duration":"290.892897ms","start":"2024-05-13T23:49:30.974616Z","end":"2024-05-13T23:49:31.265508Z","steps":["trace[1409489639] 'read index received'  (duration: 290.884696ms)","trace[1409489639] 'applied index is now lower than readState.Index'  (duration: 6.601µs)"],"step_count":2}
	{"level":"warn","ts":"2024-05-13T23:49:32.371722Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"894.817853ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-qgl9s\" ","response":"range_response_count:1 size:4247"}
	{"level":"info","ts":"2024-05-13T23:49:32.371861Z","caller":"traceutil/trace.go:171","msg":"trace[1971644184] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-569cc877fc-qgl9s; range_end:; response_count:1; response_revision:813; }","duration":"894.990669ms","start":"2024-05-13T23:49:31.476851Z","end":"2024-05-13T23:49:32.371841Z","steps":["trace[1971644184] 'range keys from in-memory index tree'  (duration: 894.655537ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-13T23:49:32.371914Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-13T23:49:31.476836Z","time spent":"895.062777ms","remote":"127.0.0.1:36422","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4270,"request content":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-qgl9s\" "}
	{"level":"info","ts":"2024-05-13T23:49:33.41259Z","caller":"traceutil/trace.go:171","msg":"trace[222600303] linearizableReadLoop","detail":"{readStateIndex:873; appliedIndex:872; }","duration":"139.785746ms","start":"2024-05-13T23:49:33.272786Z","end":"2024-05-13T23:49:33.412572Z","steps":["trace[222600303] 'read index received'  (duration: 132.360021ms)","trace[222600303] 'applied index is now lower than readState.Index'  (duration: 7.423925ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-13T23:49:33.412782Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"139.969464ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1128"}
	{"level":"info","ts":"2024-05-13T23:49:33.412828Z","caller":"traceutil/trace.go:171","msg":"trace[1878700955] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:813; }","duration":"140.03177ms","start":"2024-05-13T23:49:33.272781Z","end":"2024-05-13T23:49:33.412812Z","steps":["trace[1878700955] 'agreement among raft nodes before linearized reading'  (duration: 139.861054ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-13T23:51:40.479969Z","caller":"wal/wal.go:805","msg":"slow fdatasync","took":"3.845794921s","expected-duration":"1s"}
	{"level":"info","ts":"2024-05-13T23:51:40.481053Z","caller":"traceutil/trace.go:171","msg":"trace[1797797404] transaction","detail":"{read_only:false; response_revision:916; number_of_response:1; }","duration":"3.847205333s","start":"2024-05-13T23:51:36.633826Z","end":"2024-05-13T23:51:40.481031Z","steps":["trace[1797797404] 'process raft request'  (duration: 3.846876184s)"],"step_count":1}
	{"level":"warn","ts":"2024-05-13T23:51:40.481395Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-13T23:51:36.63379Z","time spent":"3.847412864s","remote":"127.0.0.1:36404","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:915 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	
	
	==> etcd [d42428e694b5] <==
	{"level":"info","ts":"2024-05-13T23:45:09.900972Z","caller":"traceutil/trace.go:171","msg":"trace[1060432605] transaction","detail":"{read_only:false; response_revision:240; number_of_response:1; }","duration":"104.633781ms","start":"2024-05-13T23:45:09.796269Z","end":"2024-05-13T23:45:09.900903Z","steps":["trace[1060432605] 'process raft request'  (duration: 90.156142ms)","trace[1060432605] 'compare'  (duration: 14.187094ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-13T23:45:09.901144Z","caller":"traceutil/trace.go:171","msg":"trace[653386246] transaction","detail":"{read_only:false; response_revision:242; number_of_response:1; }","duration":"101.785641ms","start":"2024-05-13T23:45:09.799344Z","end":"2024-05-13T23:45:09.90113Z","steps":["trace[653386246] 'process raft request'  (duration: 101.501797ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-13T23:45:09.901135Z","caller":"traceutil/trace.go:171","msg":"trace[715200547] linearizableReadLoop","detail":"{readStateIndex:247; appliedIndex:245; }","duration":"102.581164ms","start":"2024-05-13T23:45:09.798543Z","end":"2024-05-13T23:45:09.901124Z","steps":["trace[715200547] 'read index received'  (duration: 87.891992ms)","trace[715200547] 'applied index is now lower than readState.Index'  (duration: 14.687972ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-13T23:45:09.901515Z","caller":"traceutil/trace.go:171","msg":"trace[924140122] transaction","detail":"{read_only:false; response_revision:241; number_of_response:1; }","duration":"105.156362ms","start":"2024-05-13T23:45:09.796341Z","end":"2024-05-13T23:45:09.901497Z","steps":["trace[924140122] 'process raft request'  (duration: 104.404545ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-13T23:45:09.903018Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.934328ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/default-k8s-diff-port-062300\" ","response":"range_response_count:1 size:724"}
	{"level":"info","ts":"2024-05-13T23:45:09.903075Z","caller":"traceutil/trace.go:171","msg":"trace[1306332714] range","detail":"{range_begin:/registry/csinodes/default-k8s-diff-port-062300; range_end:; response_count:1; response_revision:243; }","duration":"105.025041ms","start":"2024-05-13T23:45:09.798035Z","end":"2024-05-13T23:45:09.903061Z","steps":["trace[1306332714] 'agreement among raft nodes before linearized reading'  (duration: 103.299074ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-13T23:45:23.708604Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.977038ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2024-05-13T23:45:23.708783Z","caller":"traceutil/trace.go:171","msg":"trace[1498049944] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replicaset-controller; range_end:; response_count:1; response_revision:331; }","duration":"108.191971ms","start":"2024-05-13T23:45:23.60057Z","end":"2024-05-13T23:45:23.708762Z","steps":["trace[1498049944] 'agreement among raft nodes before linearized reading'  (duration: 79.185694ms)","trace[1498049944] 'range keys from in-memory index tree'  (duration: 28.797544ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-13T23:45:23.708899Z","caller":"traceutil/trace.go:171","msg":"trace[621709282] transaction","detail":"{read_only:false; response_revision:332; number_of_response:1; }","duration":"107.650687ms","start":"2024-05-13T23:45:23.601226Z","end":"2024-05-13T23:45:23.708877Z","steps":["trace[621709282] 'process raft request'  (duration: 88.548772ms)","trace[621709282] 'compare'  (duration: 18.469715ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-13T23:45:23.709053Z","caller":"traceutil/trace.go:171","msg":"trace[901368942] transaction","detail":"{read_only:false; response_revision:333; number_of_response:1; }","duration":"107.794009ms","start":"2024-05-13T23:45:23.601242Z","end":"2024-05-13T23:45:23.709036Z","steps":["trace[901368942] 'process raft request'  (duration: 107.497462ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-13T23:45:24.024454Z","caller":"traceutil/trace.go:171","msg":"trace[1139136533] transaction","detail":"{read_only:false; response_revision:341; number_of_response:1; }","duration":"119.978331ms","start":"2024-05-13T23:45:23.904449Z","end":"2024-05-13T23:45:24.024427Z","steps":["trace[1139136533] 'process raft request'  (duration: 119.740094ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-13T23:45:24.024851Z","caller":"traceutil/trace.go:171","msg":"trace[490703960] transaction","detail":"{read_only:false; response_revision:340; number_of_response:1; }","duration":"120.585728ms","start":"2024-05-13T23:45:23.904249Z","end":"2024-05-13T23:45:24.024835Z","steps":["trace[490703960] 'process raft request'  (duration: 103.61465ms)","trace[490703960] 'compare'  (duration: 15.006668ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-13T23:45:24.383347Z","caller":"traceutil/trace.go:171","msg":"trace[2061403002] transaction","detail":"{read_only:false; response_revision:349; number_of_response:1; }","duration":"104.381571ms","start":"2024-05-13T23:45:24.27889Z","end":"2024-05-13T23:45:24.383271Z","steps":["trace[2061403002] 'process raft request'  (duration: 26.80563ms)","trace[2061403002] 'compare'  (duration: 77.111067ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-13T23:45:26.929399Z","caller":"traceutil/trace.go:171","msg":"trace[846336472] transaction","detail":"{read_only:false; response_revision:376; number_of_response:1; }","duration":"108.687737ms","start":"2024-05-13T23:45:26.820562Z","end":"2024-05-13T23:45:26.92925Z","steps":["trace[846336472] 'process raft request'  (duration: 58.165065ms)","trace[846336472] 'compare'  (duration: 50.270937ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-13T23:45:27.998646Z","caller":"traceutil/trace.go:171","msg":"trace[1512643704] transaction","detail":"{read_only:false; response_revision:385; number_of_response:1; }","duration":"110.563357ms","start":"2024-05-13T23:45:27.888049Z","end":"2024-05-13T23:45:27.998612Z","steps":["trace[1512643704] 'process raft request'  (duration: 90.143839ms)","trace[1512643704] 'compare'  (duration: 20.190987ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-13T23:46:11.068429Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-05-13T23:46:11.068778Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"default-k8s-diff-port-062300","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.130.2:2380"],"advertise-client-urls":["https://192.168.130.2:2379"]}
	{"level":"warn","ts":"2024-05-13T23:46:11.068969Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-13T23:46:11.069347Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-13T23:46:11.268299Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.130.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-13T23:46:11.268456Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.130.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-05-13T23:46:11.268546Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"cb2ab03b8dc334b","current-leader-member-id":"cb2ab03b8dc334b"}
	{"level":"info","ts":"2024-05-13T23:46:11.367709Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.130.2:2380"}
	{"level":"info","ts":"2024-05-13T23:46:11.367967Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.130.2:2380"}
	{"level":"info","ts":"2024-05-13T23:46:11.367991Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"default-k8s-diff-port-062300","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.130.2:2380"],"advertise-client-urls":["https://192.168.130.2:2379"]}
	
	
	==> kernel <==
	 23:51:58 up  2:55,  0 users,  load average: 7.09, 6.70, 5.99
	Linux default-k8s-diff-port-062300 5.15.146.1-microsoft-standard-WSL2 #1 SMP Thu Jan 11 04:09:03 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kube-apiserver [4cd223df9b05] <==
	W0513 23:46:20.694713       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 23:46:20.736164       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 23:46:20.736215       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 23:46:20.754579       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 23:46:20.769638       1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 23:46:20.788756       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 23:46:20.799758       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 23:46:20.815845       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 23:46:20.816050       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 23:46:20.821270       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 23:46:20.827330       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 23:46:20.838376       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 23:46:20.846372       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 23:46:20.850939       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 23:46:20.887240       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 23:46:20.928592       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 23:46:20.959483       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 23:46:20.996734       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 23:46:21.023154       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 23:46:21.072389       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 23:46:21.088600       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 23:46:21.095237       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 23:46:21.102360       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 23:46:21.118628       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 23:46:21.169513       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [c12856199872] <==
	Trace[1546878091]: ---"Object stored in database" 1260ms (23:48:30.279)
	Trace[1546878091]: [4.80141009s] [4.80141009s] END
	I0513 23:48:30.280039       1 trace.go:236] Trace[889961165]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:174180f9-42ae-4805-ace9-8c9e3acf1da7,client:192.168.130.2,api-group:,api-version:v1,name:metrics-server-569cc877fc-qgl9s,subresource:status,namespace:kube-system,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/metrics-server-569cc877fc-qgl9s/status,user-agent:kubelet/v1.30.0 (linux/amd64) kubernetes/7c48c2b,verb:PATCH (13-May-2024 23:48:29.024) (total time: 1255ms):
	Trace[889961165]: ["GuaranteedUpdate etcd3" audit-id:174180f9-42ae-4805-ace9-8c9e3acf1da7,key:/pods/kube-system/metrics-server-569cc877fc-qgl9s,type:*core.Pod,resource:pods 1255ms (23:48:29.024)
	Trace[889961165]:  ---"Txn call completed" 1251ms (23:48:30.279)]
	Trace[889961165]: ---"Object stored in database" 1251ms (23:48:30.279)
	Trace[889961165]: [1.255437733s] [1.255437733s] END
	I0513 23:48:30.280640       1 trace.go:236] Trace[765578303]: "Get" accept:application/json, */*,audit-id:1f54dd2f-62bd-4fac-960a-37720da63fa1,client:192.168.130.1,api-group:,api-version:v1,name:default-k8s-diff-port-062300,subresource:,namespace:,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/default-k8s-diff-port-062300,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:GET (13-May-2024 23:48:29.025) (total time: 1255ms):
	Trace[765578303]: ---"About to write a response" 1254ms (23:48:30.279)
	Trace[765578303]: [1.255527447s] [1.255527447s] END
	I0513 23:48:30.683069       1 trace.go:236] Trace[599239996]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.130.2,type:*v1.Endpoints,resource:apiServerIPInfo (13-May-2024 23:48:28.065) (total time: 2617ms):
	Trace[599239996]: ---"initial value restored" 952ms (23:48:29.017)
	Trace[599239996]: ---"Transaction prepared" 742ms (23:48:29.759)
	Trace[599239996]: ---"Txn call completed" 923ms (23:48:30.682)
	Trace[599239996]: [2.617734536s] [2.617734536s] END
	I0513 23:49:32.374365       1 trace.go:236] Trace[182514630]: "Get" accept:application/json, */*,audit-id:9afd0261-ce11-4b70-8089-d7eae6ef2500,client:192.168.130.1,api-group:,api-version:v1,name:metrics-server-569cc877fc-qgl9s,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/metrics-server-569cc877fc-qgl9s,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:GET (13-May-2024 23:49:31.475) (total time: 898ms):
	Trace[182514630]: ---"About to write a response" 898ms (23:49:32.373)
	Trace[182514630]: [898.493212ms] [898.493212ms] END
	W0513 23:49:52.387333       1 handler_proxy.go:93] no RequestInfo found in the context
	E0513 23:49:52.387611       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0513 23:49:52.387626       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0513 23:49:52.389778       1 handler_proxy.go:93] no RequestInfo found in the context
	E0513 23:49:52.390019       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0513 23:49:52.390031       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [95b301adb6c0] <==
	I0513 23:48:11.513060       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="56.609µs"
	I0513 23:48:16.514410       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-654756847f" duration="74.712µs"
	I0513 23:48:30.280519       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="184.728µs"
	I0513 23:48:30.760639       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-654756847f" duration="164.225µs"
	E0513 23:48:40.020973       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0513 23:48:40.510138       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0513 23:48:43.499529       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-654756847f" duration="68.811µs"
	I0513 23:49:03.520584       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="77.211µs"
	E0513 23:49:10.025176       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0513 23:49:10.517869       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0513 23:49:11.494194       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-654756847f" duration="45.608µs"
	I0513 23:49:15.512558       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="263.841µs"
	I0513 23:49:25.503350       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-654756847f" duration="72.311µs"
	E0513 23:49:40.026728       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0513 23:49:40.523680       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0513 23:50:10.040372       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0513 23:50:10.534426       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0513 23:50:34.498558       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="387.75µs"
	E0513 23:50:40.044580       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0513 23:50:40.545353       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0513 23:50:41.477355       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-654756847f" duration="128.219µs"
	I0513 23:50:47.482291       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="141.821µs"
	I0513 23:50:55.493409       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-654756847f" duration="137.721µs"
	E0513 23:51:10.066780       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0513 23:51:10.564584       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-controller-manager [f2d573001dd7] <==
	I0513 23:45:22.795006       1 shared_informer.go:320] Caches are synced for resource quota
	I0513 23:45:22.810374       1 shared_informer.go:320] Caches are synced for expand
	I0513 23:45:23.170807       1 shared_informer.go:320] Caches are synced for garbage collector
	I0513 23:45:23.170924       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0513 23:45:23.181352       1 shared_informer.go:320] Caches are synced for garbage collector
	I0513 23:45:24.178791       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="852.14026ms"
	I0513 23:45:24.386910       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="207.920508ms"
	I0513 23:45:24.502329       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="115.220581ms"
	I0513 23:45:24.502505       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="65.31µs"
	I0513 23:45:25.581756       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="240.981325ms"
	I0513 23:45:25.614224       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="32.309398ms"
	I0513 23:45:25.780863       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="166.559382ms"
	I0513 23:45:25.781018       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="91.314µs"
	I0513 23:45:27.978907       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="211.629µs"
	I0513 23:45:28.408138       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="71.41µs"
	I0513 23:45:38.202464       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="120.814µs"
	I0513 23:45:38.530709       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="72.308µs"
	I0513 23:45:38.551619       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="91.31µs"
	I0513 23:45:38.579785       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="166.619µs"
	I0513 23:45:54.532453       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="18.975116ms"
	I0513 23:45:54.532713       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="62.81µs"
	I0513 23:46:08.998660       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="71.682462ms"
	I0513 23:46:09.068868       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="70.145416ms"
	I0513 23:46:09.069414       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="58.709µs"
	I0513 23:46:09.097753       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="60.71µs"
	
	
	==> kube-proxy [55e8f1e4d6d5] <==
	I0513 23:46:57.783625       1 server_linux.go:69] "Using iptables proxy"
	I0513 23:46:57.818220       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.130.2"]
	I0513 23:46:57.976354       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0513 23:46:57.976515       1 server_linux.go:165] "Using iptables Proxier"
	I0513 23:46:57.983085       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0513 23:46:57.983209       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0513 23:46:57.983277       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0513 23:46:57.984459       1 server.go:872] "Version info" version="v1.30.0"
	I0513 23:46:57.984514       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0513 23:46:57.992485       1 config.go:101] "Starting endpoint slice config controller"
	I0513 23:46:57.993081       1 config.go:319] "Starting node config controller"
	I0513 23:46:57.993523       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0513 23:46:57.992574       1 config.go:192] "Starting service config controller"
	I0513 23:46:57.993851       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0513 23:46:57.994386       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0513 23:46:58.093785       1 shared_informer.go:320] Caches are synced for node config
	I0513 23:46:58.094244       1 shared_informer.go:320] Caches are synced for service config
	I0513 23:46:58.094678       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [de63ce128d96] <==
	I0513 23:45:27.304314       1 server_linux.go:69] "Using iptables proxy"
	I0513 23:45:27.334145       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.130.2"]
	I0513 23:45:27.499214       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0513 23:45:27.499439       1 server_linux.go:165] "Using iptables Proxier"
	I0513 23:45:27.507782       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0513 23:45:27.507921       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0513 23:45:27.507949       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0513 23:45:27.508510       1 server.go:872] "Version info" version="v1.30.0"
	I0513 23:45:27.508566       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0513 23:45:27.515371       1 config.go:192] "Starting service config controller"
	I0513 23:45:27.515503       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0513 23:45:27.515556       1 config.go:101] "Starting endpoint slice config controller"
	I0513 23:45:27.515705       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0513 23:45:27.517322       1 config.go:319] "Starting node config controller"
	I0513 23:45:27.517340       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0513 23:45:27.679165       1 shared_informer.go:320] Caches are synced for node config
	I0513 23:45:27.615944       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0513 23:45:27.679942       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [00800d2623fc] <==
	I0513 23:46:47.108205       1 serving.go:380] Generated self-signed cert in-memory
	W0513 23:46:51.269828       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0513 23:46:51.281776       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0513 23:46:51.367720       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0513 23:46:51.370325       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0513 23:46:51.573692       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0513 23:46:51.573727       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0513 23:46:51.577687       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0513 23:46:51.577955       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0513 23:46:51.577977       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0513 23:46:51.578010       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0513 23:46:51.678068       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [d5fef62ceb9a] <==
	E0513 23:45:06.398524       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0513 23:45:06.542412       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0513 23:45:06.542538       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0513 23:45:06.543332       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0513 23:45:06.543448       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0513 23:45:06.545866       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0513 23:45:06.545968       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0513 23:45:06.604824       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0513 23:45:06.605003       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0513 23:45:06.627960       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0513 23:45:06.628156       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0513 23:45:06.668195       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0513 23:45:06.668304       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0513 23:45:06.859654       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0513 23:45:06.859764       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0513 23:45:06.882283       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0513 23:45:06.882489       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0513 23:45:06.909664       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0513 23:45:06.910000       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0513 23:45:08.057746       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0513 23:45:08.057901       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0513 23:45:13.906545       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0513 23:46:11.067919       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0513 23:46:11.068456       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0513 23:46:11.068472       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	May 13 23:50:05 default-k8s-diff-port-062300 kubelet[1471]: E0513 23:50:05.464911    1471 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-654756847f-rfwsl" podUID="44a20794-3cf6-4968-adb5-ef86df25b880"
	May 13 23:50:11 default-k8s-diff-port-062300 kubelet[1471]: E0513 23:50:11.470239    1471 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-qgl9s" podUID="60a7c560-5161-4bf4-9c27-0114cb776da6"
	May 13 23:50:18 default-k8s-diff-port-062300 kubelet[1471]: E0513 23:50:18.464953    1471 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-654756847f-rfwsl" podUID="44a20794-3cf6-4968-adb5-ef86df25b880"
	May 13 23:50:22 default-k8s-diff-port-062300 kubelet[1471]: E0513 23:50:22.520817    1471 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	May 13 23:50:22 default-k8s-diff-port-062300 kubelet[1471]: E0513 23:50:22.520955    1471 kuberuntime_image.go:55] "Failed to pull image" err="Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	May 13 23:50:22 default-k8s-diff-port-062300 kubelet[1471]: E0513 23:50:22.521226    1471 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hws4l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-qgl9s_kube-system(60a7c560-5161-4bf4-9c27-0114cb776da6): ErrImagePull: Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host
	May 13 23:50:22 default-k8s-diff-port-062300 kubelet[1471]: E0513 23:50:22.521264    1471 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host\"" pod="kube-system/metrics-server-569cc877fc-qgl9s" podUID="60a7c560-5161-4bf4-9c27-0114cb776da6"
	May 13 23:50:29 default-k8s-diff-port-062300 kubelet[1471]: E0513 23:50:29.944231    1471 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" image="registry.k8s.io/echoserver:1.4"
	May 13 23:50:29 default-k8s-diff-port-062300 kubelet[1471]: E0513 23:50:29.944302    1471 kuberuntime_image.go:55] "Failed to pull image" err="[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" image="registry.k8s.io/echoserver:1.4"
	May 13 23:50:29 default-k8s-diff-port-062300 kubelet[1471]: E0513 23:50:29.944621    1471 kuberuntime_manager.go:1256] container &Container{Name:dashboard-metrics-scraper,Image:registry.k8s.io/echoserver:1.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:8000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-volume,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9bms8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 8000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:30,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:*2001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dashboard-metrics-scraper-654756847f-rfwsl_kubernetes-dashboard(44a20794-3cf6-4968-adb5-ef86df25b880): ErrImagePull: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/
	May 13 23:50:29 default-k8s-diff-port-062300 kubelet[1471]: E0513 23:50:29.944686    1471 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-654756847f-rfwsl" podUID="44a20794-3cf6-4968-adb5-ef86df25b880"
	May 13 23:50:34 default-k8s-diff-port-062300 kubelet[1471]: E0513 23:50:34.471021    1471 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-qgl9s" podUID="60a7c560-5161-4bf4-9c27-0114cb776da6"
	May 13 23:50:41 default-k8s-diff-port-062300 kubelet[1471]: E0513 23:50:41.460682    1471 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-654756847f-rfwsl" podUID="44a20794-3cf6-4968-adb5-ef86df25b880"
	May 13 23:50:47 default-k8s-diff-port-062300 kubelet[1471]: E0513 23:50:47.461333    1471 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-qgl9s" podUID="60a7c560-5161-4bf4-9c27-0114cb776da6"
	May 13 23:50:55 default-k8s-diff-port-062300 kubelet[1471]: E0513 23:50:55.461540    1471 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-654756847f-rfwsl" podUID="44a20794-3cf6-4968-adb5-ef86df25b880"
	May 13 23:50:58 default-k8s-diff-port-062300 kubelet[1471]: E0513 23:50:58.466453    1471 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-qgl9s" podUID="60a7c560-5161-4bf4-9c27-0114cb776da6"
	May 13 23:51:10 default-k8s-diff-port-062300 kubelet[1471]: E0513 23:51:10.458524    1471 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-qgl9s" podUID="60a7c560-5161-4bf4-9c27-0114cb776da6"
	May 13 23:51:10 default-k8s-diff-port-062300 kubelet[1471]: E0513 23:51:10.458632    1471 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-654756847f-rfwsl" podUID="44a20794-3cf6-4968-adb5-ef86df25b880"
	May 13 23:51:24 default-k8s-diff-port-062300 kubelet[1471]: E0513 23:51:24.459047    1471 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-qgl9s" podUID="60a7c560-5161-4bf4-9c27-0114cb776da6"
	May 13 23:51:25 default-k8s-diff-port-062300 kubelet[1471]: E0513 23:51:25.457006    1471 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-654756847f-rfwsl" podUID="44a20794-3cf6-4968-adb5-ef86df25b880"
	May 13 23:51:36 default-k8s-diff-port-062300 kubelet[1471]: E0513 23:51:36.468741    1471 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-qgl9s" podUID="60a7c560-5161-4bf4-9c27-0114cb776da6"
	May 13 23:51:36 default-k8s-diff-port-062300 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	May 13 23:51:36 default-k8s-diff-port-062300 kubelet[1471]: I0513 23:51:36.669136    1471 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	May 13 23:51:36 default-k8s-diff-port-062300 systemd[1]: kubelet.service: Deactivated successfully.
	May 13 23:51:36 default-k8s-diff-port-062300 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [a91aebc02bd9] <==
	2024/05/13 23:47:34 Starting overwatch
	2024/05/13 23:47:34 Using namespace: kubernetes-dashboard
	2024/05/13 23:47:34 Using in-cluster config to connect to apiserver
	2024/05/13 23:47:34 Using secret token for csrf signing
	2024/05/13 23:47:34 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/05/13 23:47:34 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/05/13 23:47:34 Successful initial request to the apiserver, version: v1.30.0
	2024/05/13 23:47:34 Generating JWE encryption key
	2024/05/13 23:47:34 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/05/13 23:47:34 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/05/13 23:47:35 Initializing JWE encryption key from synchronized object
	2024/05/13 23:47:35 Creating in-cluster Sidecar client
	2024/05/13 23:47:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/13 23:47:35 Serving insecurely on HTTP port: 9090
	2024/05/13 23:48:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/13 23:48:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/13 23:49:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/13 23:49:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/13 23:50:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/13 23:50:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/13 23:51:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/13 23:51:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [889b473e730e] <==
	I0513 23:47:33.974594       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0513 23:47:33.995668       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0513 23:47:33.996404       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0513 23:47:51.448284       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0513 23:47:51.448869       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-062300_50754299-4093-42c0-8e09-2d0748e282ef!
	I0513 23:47:51.448867       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1224bdee-850c-49b7-90b1-e35e23f8e17a", APIVersion:"v1", ResourceVersion:"696", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-062300_50754299-4093-42c0-8e09-2d0748e282ef became leader
	I0513 23:47:51.550436       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-062300_50754299-4093-42c0-8e09-2d0748e282ef!
	
	
	==> storage-provisioner [ee111c323bdc] <==
	I0513 23:46:56.775399       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0513 23:47:17.830173       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0513 23:51:45.275336    8276 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
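An aside on the stderr warning captured above: the long hexadecimal directory name in the missing `meta.json` path is not random. Docker's CLI stores each context's metadata under a directory named after the SHA-256 digest of the context name, so the path for the `default` context can be reproduced directly. This is a minimal sketch of that derivation; it assumes only the hashing scheme, not any minikube internals:

```python
import hashlib

# Docker names each context's metadata directory after the SHA-256
# digest of the context name. The warning above points at
# ...\.docker\contexts\meta\37a8eec1...\meta.json for context "default",
# and that directory name is simply sha256("default"):
digest = hashlib.sha256(b"default").hexdigest()
print(digest)  # matches the directory name in the captured warning
```

The warning itself is benign for these tests (minikube falls back past the stale context); a common remedy on the affected host is `docker context use default`, which rewrites the `currentContext` entry the CLI could not resolve.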
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-062300 -n default-k8s-diff-port-062300
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-062300 -n default-k8s-diff-port-062300: exit status 2 (1.6912705s)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
** stderr ** 
	W0513 23:51:59.696835    4932 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "default-k8s-diff-port-062300" apiserver is not running, skipping kubectl commands (state="Paused")
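The skip decision logged just above can be restated compactly: the post-mortem kubectl commands run only when the apiserver reports `Running`, and here the host is `Running` while the apiserver is `Paused`, so they are skipped. This is a hypothetical re-statement of that check, not the actual helpers_test.go code; the function name and signature are illustrative only:

```python
# Hypothetical restatement of the skip logic logged above:
# post-mortem kubectl commands run only when the apiserver is Running.
def should_run_kubectl(host_state: str, apiserver_state: str) -> bool:
    return host_state == "Running" and apiserver_state == "Running"

# The captured status for default-k8s-diff-port-062300 was
# Host=Running, APIServer=Paused, hence the "skipping kubectl" line:
print(should_run_kubectl("Running", "Paused"))  # False
```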
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-diff-port-062300
helpers_test.go:235: (dbg) docker inspect default-k8s-diff-port-062300:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "353c61b39d334fa83b55d53a55ef25ef4a0a707f490cd46c1d3ccaf606d4d1eb",
	        "Created": "2024-05-13T23:44:16.851168338Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 290363,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-05-13T23:46:26.630756816Z",
	            "FinishedAt": "2024-05-13T23:46:21.989713432Z"
	        },
	        "Image": "sha256:5a6e59a9bdc0d32876fd51e3702c6cb16f38b145ed5528e5f0bfb1de21e70803",
	        "ResolvConfPath": "/var/lib/docker/containers/353c61b39d334fa83b55d53a55ef25ef4a0a707f490cd46c1d3ccaf606d4d1eb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/353c61b39d334fa83b55d53a55ef25ef4a0a707f490cd46c1d3ccaf606d4d1eb/hostname",
	        "HostsPath": "/var/lib/docker/containers/353c61b39d334fa83b55d53a55ef25ef4a0a707f490cd46c1d3ccaf606d4d1eb/hosts",
	        "LogPath": "/var/lib/docker/containers/353c61b39d334fa83b55d53a55ef25ef4a0a707f490cd46c1d3ccaf606d4d1eb/353c61b39d334fa83b55d53a55ef25ef4a0a707f490cd46c1d3ccaf606d4d1eb-json.log",
	        "Name": "/default-k8s-diff-port-062300",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-062300:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-062300",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/115411ea20ab19e2259b2ebacc50b96e6447f04995203753fd9b555d68858dc7-init/diff:/var/lib/docker/overlay2/e3065cc89db7a8fd6915450a1724667534193c4a9eb8348f67381d1430bd11e4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/115411ea20ab19e2259b2ebacc50b96e6447f04995203753fd9b555d68858dc7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/115411ea20ab19e2259b2ebacc50b96e6447f04995203753fd9b555d68858dc7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/115411ea20ab19e2259b2ebacc50b96e6447f04995203753fd9b555d68858dc7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-062300",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-062300/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-062300",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-062300",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-062300",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e7b7eac08a28f99efc319dd68156dc2a0e6fd356e15a817bf93e12ac990ebb71",
	            "SandboxKey": "/var/run/docker/netns/e7b7eac08a28",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56521"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56522"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56523"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56524"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56525"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-062300": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.130.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:82:02",
	                    "NetworkID": "9836f4835f9af64b31b0bca28aacb94567d42531bf87483afa9fd5dddc13a768",
	                    "EndpointID": "4b17d6957ee6a53774e64a51314c2c5320c4f6db7fd4ecbbf834ad43e5e9c011",
	                    "Gateway": "192.168.130.1",
	                    "IPAddress": "192.168.130.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "default-k8s-diff-port-062300",
	                        "353c61b39d33"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-062300 -n default-k8s-diff-port-062300
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-062300 -n default-k8s-diff-port-062300: exit status 2 (1.5804967s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	W0513 23:52:01.594670    6040 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p default-k8s-diff-port-062300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p default-k8s-diff-port-062300 logs -n 25: (14.0058079s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|-------------------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|-------------------|---------|---------------------|---------------------|
	| start   | -p embed-certs-524600                                  | embed-certs-524600           | minikube4\jenkins | v1.33.1 | 13 May 24 23:46 UTC | 13 May 24 23:50 UTC |
	|         | --memory=2200                                          |                              |                   |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |                   |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |                   |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-062300  | default-k8s-diff-port-062300 | minikube4\jenkins | v1.33.1 | 13 May 24 23:46 UTC | 13 May 24 23:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |                   |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |                   |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-062300 | minikube4\jenkins | v1.33.1 | 13 May 24 23:46 UTC | 13 May 24 23:46 UTC |
	|         | default-k8s-diff-port-062300                           |                              |                   |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |                   |         |                     |                     |
	| addons  | enable dashboard -p no-preload-561500                  | no-preload-561500            | minikube4\jenkins | v1.33.1 | 13 May 24 23:46 UTC | 13 May 24 23:46 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |                   |         |                     |                     |
	| start   | -p no-preload-561500                                   | no-preload-561500            | minikube4\jenkins | v1.33.1 | 13 May 24 23:46 UTC | 13 May 24 23:51 UTC |
	|         | --memory=2200                                          |                              |                   |         |                     |                     |
	|         | --alsologtostderr                                      |                              |                   |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |                   |         |                     |                     |
	|         | --driver=docker                                        |                              |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |                   |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-062300       | default-k8s-diff-port-062300 | minikube4\jenkins | v1.33.1 | 13 May 24 23:46 UTC | 13 May 24 23:46 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |                   |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-062300 | minikube4\jenkins | v1.33.1 | 13 May 24 23:46 UTC | 13 May 24 23:51 UTC |
	|         | default-k8s-diff-port-062300                           |                              |                   |         |                     |                     |
	|         | --memory=2200                                          |                              |                   |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |                   |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |                   |         |                     |                     |
	|         | --driver=docker                                        |                              |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |                   |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-873100        | old-k8s-version-873100       | minikube4\jenkins | v1.33.1 | 13 May 24 23:47 UTC | 13 May 24 23:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |                   |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |                   |         |                     |                     |
	| stop    | -p old-k8s-version-873100                              | old-k8s-version-873100       | minikube4\jenkins | v1.33.1 | 13 May 24 23:47 UTC | 13 May 24 23:47 UTC |
	|         | --alsologtostderr -v=3                                 |                              |                   |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-873100             | old-k8s-version-873100       | minikube4\jenkins | v1.33.1 | 13 May 24 23:47 UTC | 13 May 24 23:47 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |                   |         |                     |                     |
	| start   | -p old-k8s-version-873100                              | old-k8s-version-873100       | minikube4\jenkins | v1.33.1 | 13 May 24 23:47 UTC |                     |
	|         | --memory=2200                                          |                              |                   |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |                   |         |                     |                     |
	|         | --kvm-network=default                                  |                              |                   |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |                   |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |                   |         |                     |                     |
	|         | --keep-context=false                                   |                              |                   |         |                     |                     |
	|         | --driver=docker                                        |                              |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |                   |         |                     |                     |
	| image   | embed-certs-524600 image list                          | embed-certs-524600           | minikube4\jenkins | v1.33.1 | 13 May 24 23:50 UTC | 13 May 24 23:50 UTC |
	|         | --format=json                                          |                              |                   |         |                     |                     |
	| pause   | -p embed-certs-524600                                  | embed-certs-524600           | minikube4\jenkins | v1.33.1 | 13 May 24 23:50 UTC | 13 May 24 23:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |                   |         |                     |                     |
	| unpause | -p embed-certs-524600                                  | embed-certs-524600           | minikube4\jenkins | v1.33.1 | 13 May 24 23:51 UTC | 13 May 24 23:51 UTC |
	|         | --alsologtostderr -v=1                                 |                              |                   |         |                     |                     |
	| delete  | -p embed-certs-524600                                  | embed-certs-524600           | minikube4\jenkins | v1.33.1 | 13 May 24 23:51 UTC | 13 May 24 23:51 UTC |
	| delete  | -p embed-certs-524600                                  | embed-certs-524600           | minikube4\jenkins | v1.33.1 | 13 May 24 23:51 UTC | 13 May 24 23:51 UTC |
	| start   | -p newest-cni-949100 --memory=2200 --alsologtostderr   | newest-cni-949100            | minikube4\jenkins | v1.33.1 | 13 May 24 23:51 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |                   |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |                   |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |                   |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |                   |         |                     |                     |
	|         | --driver=docker --kubernetes-version=v1.30.0           |                              |                   |         |                     |                     |
	| image   | no-preload-561500 image list                           | no-preload-561500            | minikube4\jenkins | v1.33.1 | 13 May 24 23:51 UTC | 13 May 24 23:51 UTC |
	|         | --format=json                                          |                              |                   |         |                     |                     |
	| pause   | -p no-preload-561500                                   | no-preload-561500            | minikube4\jenkins | v1.33.1 | 13 May 24 23:51 UTC | 13 May 24 23:51 UTC |
	|         | --alsologtostderr -v=1                                 |                              |                   |         |                     |                     |
	| unpause | -p no-preload-561500                                   | no-preload-561500            | minikube4\jenkins | v1.33.1 | 13 May 24 23:51 UTC | 13 May 24 23:51 UTC |
	|         | --alsologtostderr -v=1                                 |                              |                   |         |                     |                     |
	| delete  | -p no-preload-561500                                   | no-preload-561500            | minikube4\jenkins | v1.33.1 | 13 May 24 23:51 UTC | 13 May 24 23:51 UTC |
	| image   | default-k8s-diff-port-062300                           | default-k8s-diff-port-062300 | minikube4\jenkins | v1.33.1 | 13 May 24 23:51 UTC | 13 May 24 23:51 UTC |
	|         | image list --format=json                               |                              |                   |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-062300 | minikube4\jenkins | v1.33.1 | 13 May 24 23:51 UTC |                     |
	|         | default-k8s-diff-port-062300                           |                              |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |                   |         |                     |                     |
	| delete  | -p no-preload-561500                                   | no-preload-561500            | minikube4\jenkins | v1.33.1 | 13 May 24 23:51 UTC | 13 May 24 23:51 UTC |
	| start   | -p auto-589900 --memory=3072                           | auto-589900                  | minikube4\jenkins | v1.33.1 | 13 May 24 23:51 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |                   |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |                   |         |                     |                     |
	|         | --driver=docker                                        |                              |                   |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/13 23:51:49
	Running on machine: minikube4
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0513 23:51:49.826017   15836 out.go:291] Setting OutFile to fd 1624 ...
	I0513 23:51:49.826411   15836 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 23:51:49.826411   15836 out.go:304] Setting ErrFile to fd 1504...
	I0513 23:51:49.826411   15836 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 23:51:49.850914   15836 out.go:298] Setting JSON to false
	I0513 23:51:49.855693   15836 start.go:129] hostinfo: {"hostname":"minikube4","uptime":10548,"bootTime":1715633761,"procs":213,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4355 Build 19045.4355","kernelVersion":"10.0.19045.4355 Build 19045.4355","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W0513 23:51:49.855693   15836 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 23:51:49.860452   15836 out.go:177] * [auto-589900] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4355 Build 19045.4355
	I0513 23:51:49.867298   15836 notify.go:220] Checking for updates...
	I0513 23:51:49.872210   15836 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0513 23:51:49.876278   15836 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0513 23:51:49.881797   15836 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I0513 23:51:49.884481   15836 out.go:177]   - MINIKUBE_LOCATION=18872
	I0513 23:51:49.893206   15836 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0513 23:51:49.898633   15836 config.go:182] Loaded profile config "default-k8s-diff-port-062300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 23:51:49.898848   15836 config.go:182] Loaded profile config "newest-cni-949100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 23:51:49.899496   15836 config.go:182] Loaded profile config "old-k8s-version-873100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0513 23:51:49.899496   15836 driver.go:392] Setting default libvirt URI to qemu:///system
	I0513 23:51:50.233032   15836 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0513 23:51:50.245046   15836 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0513 23:51:50.660156   15836 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:99 OomKillDisable:true NGoroutines:97 SystemTime:2024-05-13 23:51:50.610205555 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0513 23:51:50.665120   15836 out.go:177] * Using the docker driver based on user configuration
	I0513 23:51:46.419891    7080 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-949100 --name newest-cni-949100 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-949100 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-949100 --network newest-cni-949100 --ip 192.168.76.2 --volume newest-cni-949100:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e: (2.2402927s)
	I0513 23:51:46.439755    7080 cli_runner.go:164] Run: docker container inspect newest-cni-949100 --format={{.State.Running}}
	I0513 23:51:46.709199    7080 cli_runner.go:164] Run: docker container inspect newest-cni-949100 --format={{.State.Status}}
	I0513 23:51:46.948755    7080 cli_runner.go:164] Run: docker exec newest-cni-949100 stat /var/lib/dpkg/alternatives/iptables
	I0513 23:51:47.367780    7080 oci.go:144] the created container "newest-cni-949100" has a running status.
	I0513 23:51:47.367930    7080 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-949100\id_rsa...
	I0513 23:51:47.833703    7080 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-949100\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0513 23:51:48.126026    7080 cli_runner.go:164] Run: docker container inspect newest-cni-949100 --format={{.State.Status}}
	I0513 23:51:48.349613    7080 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0513 23:51:48.349613    7080 kic_runner.go:114] Args: [docker exec --privileged newest-cni-949100 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0513 23:51:48.619671    7080 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-949100\id_rsa...
	I0513 23:51:46.683703   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:51:49.197325   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:51:50.669742   15836 start.go:297] selected driver: docker
	I0513 23:51:50.669891   15836 start.go:901] validating driver "docker" against <nil>
	I0513 23:51:50.670037   15836 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0513 23:51:50.741550   15836 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0513 23:51:51.127511   15836 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:99 OomKillDisable:true NGoroutines:97 SystemTime:2024-05-13 23:51:51.091450038 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0513 23:51:51.128402   15836 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0513 23:51:51.128821   15836 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0513 23:51:51.135378   15836 out.go:177] * Using Docker Desktop driver with root privileges
	I0513 23:51:51.139208   15836 cni.go:84] Creating CNI manager for ""
	I0513 23:51:51.139275   15836 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0513 23:51:51.139340   15836 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0513 23:51:51.139546   15836 start.go:340] cluster config:
	{Name:auto-589900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:auto-589900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 23:51:51.144447   15836 out.go:177] * Starting "auto-589900" primary control-plane node in "auto-589900" cluster
	I0513 23:51:51.147208   15836 cache.go:121] Beginning downloading kic base image for docker with docker
	I0513 23:51:51.150751   15836 out.go:177] * Pulling base image v0.0.44 ...
	I0513 23:51:51.156612   15836 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 23:51:51.156719   15836 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e in local docker daemon
	I0513 23:51:51.156984   15836 preload.go:147] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0513 23:51:51.157047   15836 cache.go:56] Caching tarball of preloaded images
	I0513 23:51:51.157510   15836 preload.go:173] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0513 23:51:51.157764   15836 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0513 23:51:51.158145   15836 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-589900\config.json ...
	I0513 23:51:51.158479   15836 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-589900\config.json: {Name:mk38b2fd89d9b739357e085feb236fdd57909965 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 23:51:51.375700   15836 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e in local docker daemon, skipping pull
	I0513 23:51:51.375700   15836 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e exists in daemon, skipping load
	I0513 23:51:51.375700   15836 cache.go:194] Successfully downloaded all kic artifacts
	I0513 23:51:51.375700   15836 start.go:360] acquireMachinesLock for auto-589900: {Name:mk6918660d6a974eb4130b50247042dd7b5641b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 23:51:51.375700   15836 start.go:364] duration metric: took 0s to acquireMachinesLock for "auto-589900"
	I0513 23:51:51.376365   15836 start.go:93] Provisioning new machine with config: &{Name:auto-589900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:auto-589900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 23:51:51.376557   15836 start.go:125] createHost starting for "" (driver="docker")
	I0513 23:51:51.383247   15836 out.go:204] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0513 23:51:51.384191   15836 start.go:159] libmachine.API.Create for "auto-589900" (driver="docker")
	I0513 23:51:51.384278   15836 client.go:168] LocalClient.Create starting
	I0513 23:51:51.385050   15836 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I0513 23:51:51.385050   15836 main.go:141] libmachine: Decoding PEM data...
	I0513 23:51:51.385050   15836 main.go:141] libmachine: Parsing certificate...
	I0513 23:51:51.385050   15836 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I0513 23:51:51.385678   15836 main.go:141] libmachine: Decoding PEM data...
	I0513 23:51:51.385678   15836 main.go:141] libmachine: Parsing certificate...
	I0513 23:51:51.393330   15836 cli_runner.go:164] Run: docker network inspect auto-589900 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0513 23:51:51.585257   15836 cli_runner.go:211] docker network inspect auto-589900 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0513 23:51:51.594165   15836 network_create.go:281] running [docker network inspect auto-589900] to gather additional debugging logs...
	I0513 23:51:51.594233   15836 cli_runner.go:164] Run: docker network inspect auto-589900
	W0513 23:51:51.796935   15836 cli_runner.go:211] docker network inspect auto-589900 returned with exit code 1
	I0513 23:51:51.796935   15836 network_create.go:284] error running [docker network inspect auto-589900]: docker network inspect auto-589900: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-589900 not found
	I0513 23:51:51.796935   15836 network_create.go:286] output of [docker network inspect auto-589900]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-589900 not found
	
	** /stderr **
	I0513 23:51:51.817583   15836 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0513 23:51:52.027645   15836 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0513 23:51:52.058000   15836 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0513 23:51:52.097716   15836 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0513 23:51:52.128762   15836 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0513 23:51:52.149997   15836 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0014ee060}
	I0513 23:51:52.150111   15836 network_create.go:124] attempt to create docker network auto-589900 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0513 23:51:52.161396   15836 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-589900 auto-589900
	I0513 23:51:52.508647   15836 network_create.go:108] docker network auto-589900 192.168.85.0/24 created
	I0513 23:51:52.509929   15836 kic.go:121] calculated static IP "192.168.85.2" for the "auto-589900" container
	I0513 23:51:52.528395   15836 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0513 23:51:52.751035   15836 cli_runner.go:164] Run: docker volume create auto-589900 --label name.minikube.sigs.k8s.io=auto-589900 --label created_by.minikube.sigs.k8s.io=true
	I0513 23:51:52.958497   15836 oci.go:103] Successfully created a docker volume auto-589900
	I0513 23:51:52.972160   15836 cli_runner.go:164] Run: docker run --rm --name auto-589900-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-589900 --entrypoint /usr/bin/test -v auto-589900:/var gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e -d /var/lib
	I0513 23:51:54.750540   15836 cli_runner.go:217] Completed: docker run --rm --name auto-589900-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-589900 --entrypoint /usr/bin/test -v auto-589900:/var gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e -d /var/lib: (1.7777513s)
	I0513 23:51:54.750540   15836 oci.go:107] Successfully prepared a docker volume auto-589900
	I0513 23:51:54.750540   15836 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 23:51:54.750540   15836 kic.go:194] Starting extracting preloaded images to volume ...
	I0513 23:51:54.773692   15836 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-589900:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e -I lz4 -xf /preloaded.tar -C /extractDir
	I0513 23:51:51.829785    7080 cli_runner.go:164] Run: docker container inspect newest-cni-949100 --format={{.State.Status}}
	I0513 23:51:52.017827    7080 machine.go:94] provisionDockerMachine start ...
	I0513 23:51:52.036459    7080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-949100
	I0513 23:51:52.233718    7080 main.go:141] libmachine: Using SSH client type: native
	I0513 23:51:52.242181    7080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x55a3c0] 0x55cfa0 <nil>  [] 0s} 127.0.0.1 56861 <nil> <nil>}
	I0513 23:51:52.242181    7080 main.go:141] libmachine: About to run SSH command:
	hostname
	I0513 23:51:52.452902    7080 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-949100
	
	I0513 23:51:52.453422    7080 ubuntu.go:169] provisioning hostname "newest-cni-949100"
	I0513 23:51:52.463767    7080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-949100
	I0513 23:51:52.657442    7080 main.go:141] libmachine: Using SSH client type: native
	I0513 23:51:52.658145    7080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x55a3c0] 0x55cfa0 <nil>  [] 0s} 127.0.0.1 56861 <nil> <nil>}
	I0513 23:51:52.658175    7080 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-949100 && echo "newest-cni-949100" | sudo tee /etc/hostname
	I0513 23:51:52.912433    7080 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-949100
	
	I0513 23:51:52.921514    7080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-949100
	I0513 23:51:53.114620    7080 main.go:141] libmachine: Using SSH client type: native
	I0513 23:51:53.115274    7080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x55a3c0] 0x55cfa0 <nil>  [] 0s} 127.0.0.1 56861 <nil> <nil>}
	I0513 23:51:53.115274    7080 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-949100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-949100/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-949100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0513 23:51:53.348751    7080 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0513 23:51:53.348751    7080 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I0513 23:51:53.349038    7080 ubuntu.go:177] setting up certificates
	I0513 23:51:53.349038    7080 provision.go:84] configureAuth start
	I0513 23:51:53.370730    7080 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-949100
	I0513 23:51:53.552872    7080 provision.go:143] copyHostCerts
	I0513 23:51:53.554056    7080 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I0513 23:51:53.554056    7080 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I0513 23:51:53.554647    7080 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0513 23:51:53.555725    7080 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I0513 23:51:53.555725    7080 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I0513 23:51:53.555725    7080 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0513 23:51:53.557776    7080 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I0513 23:51:53.557776    7080 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I0513 23:51:53.559150    7080 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I0513 23:51:53.560582    7080 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.newest-cni-949100 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-949100]
	I0513 23:51:53.724108    7080 provision.go:177] copyRemoteCerts
	I0513 23:51:53.734393    7080 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0513 23:51:53.749187    7080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-949100
	I0513 23:51:53.953023    7080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56861 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-949100\id_rsa Username:docker}
	I0513 23:51:54.132475    7080 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0513 23:51:54.204574    7080 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0513 23:51:54.253815    7080 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0513 23:51:54.493063    7080 provision.go:87] duration metric: took 1.143973s to configureAuth
	I0513 23:51:54.493146    7080 ubuntu.go:193] setting minikube options for container-runtime
	I0513 23:51:54.493770    7080 config.go:182] Loaded profile config "newest-cni-949100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 23:51:54.504713    7080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-949100
	I0513 23:51:54.683616    7080 main.go:141] libmachine: Using SSH client type: native
	I0513 23:51:54.684040    7080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x55a3c0] 0x55cfa0 <nil>  [] 0s} 127.0.0.1 56861 <nil> <nil>}
	I0513 23:51:54.684040    7080 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0513 23:51:54.905957    7080 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0513 23:51:54.905957    7080 ubuntu.go:71] root file system type: overlay
	I0513 23:51:54.906486    7080 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0513 23:51:54.918029    7080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-949100
	I0513 23:51:55.106565    7080 main.go:141] libmachine: Using SSH client type: native
	I0513 23:51:55.106565    7080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x55a3c0] 0x55cfa0 <nil>  [] 0s} 127.0.0.1 56861 <nil> <nil>}
	I0513 23:51:55.107097    7080 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0513 23:51:55.325344    7080 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0513 23:51:55.337902    7080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-949100
	I0513 23:51:55.511075    7080 main.go:141] libmachine: Using SSH client type: native
	I0513 23:51:55.511735    7080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x55a3c0] 0x55cfa0 <nil>  [] 0s} 127.0.0.1 56861 <nil> <nil>}
	I0513 23:51:55.511812    7080 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
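For reference, the unit file generated above depends on one systemd idiom its own comments describe: an empty `ExecStart=` line that clears the command inherited from the base configuration before a replacement is set. A minimal drop-in illustrating just that pattern is sketched below; the override path and the trimmed dockerd flags are illustrative assumptions, not values taken from this run:

```ini
# Hypothetical drop-in: /etc/systemd/system/docker.service.d/override.conf
[Service]
# The bare ExecStart= clears the ExecStart inherited from the base unit.
# Without it, systemd refuses to start the service:
#   "Service has more than one ExecStart= setting, which is only allowed
#    for Type=oneshot services."
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
```

A drop-in like this is activated the same way the SSH command in the log does it for the full unit: `sudo systemctl daemon-reload && sudo systemctl restart docker`.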
	I0513 23:51:51.678716   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:51:54.191847   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:51:58.111020    7080 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-04-30 11:46:26.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-05-13 23:51:55.306021375 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0513 23:51:58.111020    7080 machine.go:97] duration metric: took 6.0929151s to provisionDockerMachine
	I0513 23:51:58.111020    7080 client.go:171] duration metric: took 40.4938782s to LocalClient.Create
	I0513 23:51:58.111020    7080 start.go:167] duration metric: took 40.4938782s to libmachine.API.Create "newest-cni-949100"
	I0513 23:51:58.111556    7080 start.go:293] postStartSetup for "newest-cni-949100" (driver="docker")
	I0513 23:51:58.111623    7080 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0513 23:51:58.129390    7080 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0513 23:51:58.139821    7080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-949100
	I0513 23:51:58.330095    7080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56861 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-949100\id_rsa Username:docker}
	I0513 23:51:58.496352    7080 ssh_runner.go:195] Run: cat /etc/os-release
	I0513 23:51:58.508517    7080 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0513 23:51:58.508517    7080 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0513 23:51:58.508517    7080 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0513 23:51:58.508517    7080 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0513 23:51:58.508517    7080 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I0513 23:51:58.509264    7080 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I0513 23:51:58.509264    7080 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\158682.pem -> 158682.pem in /etc/ssl/certs
	I0513 23:51:58.527693    7080 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0513 23:51:58.550159    7080 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\158682.pem --> /etc/ssl/certs/158682.pem (1708 bytes)
	I0513 23:51:58.607801    7080 start.go:296] duration metric: took 496.1048ms for postStartSetup
	I0513 23:51:58.623297    7080 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-949100
	I0513 23:51:58.865730    7080 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-949100\config.json ...
	I0513 23:51:58.885955    7080 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0513 23:51:58.897077    7080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-949100
	I0513 23:51:59.112982    7080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56861 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-949100\id_rsa Username:docker}
	I0513 23:51:59.301669    7080 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0513 23:51:59.333432    7080 start.go:128] duration metric: took 41.7279946s to createHost
	I0513 23:51:59.333624    7080 start.go:83] releasing machines lock for "newest-cni-949100", held for 41.7288529s
	I0513 23:51:59.349448    7080 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-949100
	I0513 23:51:59.651864    7080 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0513 23:51:59.669413    7080 ssh_runner.go:195] Run: cat /version.json
	I0513 23:51:59.671504    7080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-949100
	I0513 23:51:59.694551    7080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-949100
	I0513 23:51:59.915923    7080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56861 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-949100\id_rsa Username:docker}
	I0513 23:51:59.921240    7080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56861 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-949100\id_rsa Username:docker}
	I0513 23:52:00.087577    7080 ssh_runner.go:195] Run: systemctl --version
	I0513 23:52:00.283911    7080 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0513 23:52:00.327409    7080 ssh_runner.go:195] Run: sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	W0513 23:52:00.359263    7080 start.go:438] unable to name loopback interface in configureRuntimes: unable to patch loopback cni config "/etc/cni/net.d/*loopback.conf*": sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;: Process exited with status 1
	stdout:
	
	stderr:
	find: '\\etc\\cni\\net.d': No such file or directory
	I0513 23:52:00.375180    7080 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0513 23:52:00.457110    7080 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0513 23:52:00.457243    7080 start.go:494] detecting cgroup driver to use...
	I0513 23:52:00.457367    7080 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0513 23:52:00.457579    7080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0513 23:52:00.517215    7080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0513 23:52:00.588168    7080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0513 23:52:00.617884    7080 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0513 23:52:00.630370    7080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0513 23:52:00.679812    7080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0513 23:52:00.726954    7080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0513 23:52:00.776448    7080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0513 23:52:00.818297    7080 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0513 23:51:56.681122   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:51:58.683717   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	I0513 23:52:00.693547   10160 pod_ready.go:102] pod "metrics-server-9975d5f86-q9ldg" in "kube-system" namespace has status "Ready":"False"
	
	
	==> Docker <==
	May 13 23:51:16 default-k8s-diff-port-062300 dockerd[1011]: 2024/05/13 23:51:16 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:51:16 default-k8s-diff-port-062300 dockerd[1011]: 2024/05/13 23:51:16 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:51:16 default-k8s-diff-port-062300 dockerd[1011]: 2024/05/13 23:51:16 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:51:16 default-k8s-diff-port-062300 dockerd[1011]: 2024/05/13 23:51:16 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:51:16 default-k8s-diff-port-062300 dockerd[1011]: 2024/05/13 23:51:16 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:51:16 default-k8s-diff-port-062300 dockerd[1011]: 2024/05/13 23:51:16 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:51:17 default-k8s-diff-port-062300 dockerd[1011]: 2024/05/13 23:51:17 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:51:17 default-k8s-diff-port-062300 dockerd[1011]: 2024/05/13 23:51:17 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:51:17 default-k8s-diff-port-062300 dockerd[1011]: 2024/05/13 23:51:17 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:51:37 default-k8s-diff-port-062300 dockerd[1011]: time="2024-05-13T23:51:37.279774646Z" level=error msg="Handler for POST /v1.45/containers/018718677fa4/pause returned error: cannot pause container 018718677fa4719d51c4ede331131e4149b71e760e47742a99b4ac8ead79c73f: OCI runtime pause failed: unable to freeze: unknown" spanID=22afc4cf088aebad traceID=f3aa1e793ffda908a2ffc7580ffe5a84
	May 13 23:51:47 default-k8s-diff-port-062300 dockerd[1011]: 2024/05/13 23:51:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:51:47 default-k8s-diff-port-062300 dockerd[1011]: 2024/05/13 23:51:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:51:58 default-k8s-diff-port-062300 dockerd[1011]: 2024/05/13 23:51:58 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:51:58 default-k8s-diff-port-062300 dockerd[1011]: 2024/05/13 23:51:58 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:51:58 default-k8s-diff-port-062300 dockerd[1011]: 2024/05/13 23:51:58 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:51:58 default-k8s-diff-port-062300 dockerd[1011]: 2024/05/13 23:51:58 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:51:58 default-k8s-diff-port-062300 dockerd[1011]: 2024/05/13 23:51:58 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:51:58 default-k8s-diff-port-062300 dockerd[1011]: 2024/05/13 23:51:58 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:51:58 default-k8s-diff-port-062300 dockerd[1011]: 2024/05/13 23:51:58 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:51:58 default-k8s-diff-port-062300 dockerd[1011]: 2024/05/13 23:51:58 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:51:58 default-k8s-diff-port-062300 dockerd[1011]: 2024/05/13 23:51:58 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:51:58 default-k8s-diff-port-062300 dockerd[1011]: 2024/05/13 23:51:58 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:51:59 default-k8s-diff-port-062300 dockerd[1011]: 2024/05/13 23:51:59 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a91aebc02bd94       kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93        4 minutes ago       Running             kubernetes-dashboard      0                   aa021e09b86aa       kubernetes-dashboard-779776cb65-rm6z4
	889b473e730e9       6e38f40d628db                                                                                         4 minutes ago       Running             storage-provisioner       2                   7109dcea2fb66       storage-provisioner
	f0cb058e3c678       56cc512116c8f                                                                                         5 minutes ago       Running             busybox                   1                   ff9a54180054e       busybox
	8fcc6fb728ce0       cbb01a7bd410d                                                                                         5 minutes ago       Running             coredns                   1                   d46d56ef2e5d6       coredns-7db6d8ff4d-lqgnn
	55e8f1e4d6d5c       a0bf559e280cf                                                                                         5 minutes ago       Running             kube-proxy                1                   c9709ee156512       kube-proxy-ztwp2
	ee111c323bdc2       6e38f40d628db                                                                                         5 minutes ago       Exited              storage-provisioner       1                   7109dcea2fb66       storage-provisioner
	00800d2623fc8       259c8277fcbbc                                                                                         5 minutes ago       Running             kube-scheduler            1                   1b2ebfd091244       kube-scheduler-default-k8s-diff-port-062300
	95b301adb6c0b       c7aad43836fa5                                                                                         5 minutes ago       Running             kube-controller-manager   1                   943fb30ac3369       kube-controller-manager-default-k8s-diff-port-062300
	018718677fa47       3861cfcd7c04c                                                                                         5 minutes ago       Running             etcd                      1                   ea65c8d787cfe       etcd-default-k8s-diff-port-062300
	c12856199872e       c42f13656d0b2                                                                                         5 minutes ago       Running             kube-apiserver            1                   f54040a2d27f3       kube-apiserver-default-k8s-diff-port-062300
	72fe335cbf9ed       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   6 minutes ago       Exited              busybox                   0                   17d4ee55244c2       busybox
	f8616bb48bd47       cbb01a7bd410d                                                                                         6 minutes ago       Exited              coredns                   0                   81e6f970b6247       coredns-7db6d8ff4d-lqgnn
	de63ce128d965       a0bf559e280cf                                                                                         6 minutes ago       Exited              kube-proxy                0                   eeebabd6443e5       kube-proxy-ztwp2
	f2d573001dd7e       c7aad43836fa5                                                                                         7 minutes ago       Exited              kube-controller-manager   0                   8034e5ae8850e       kube-controller-manager-default-k8s-diff-port-062300
	4cd223df9b05c       c42f13656d0b2                                                                                         7 minutes ago       Exited              kube-apiserver            0                   d7971dc1c6fc5       kube-apiserver-default-k8s-diff-port-062300
	d5fef62ceb9a3       259c8277fcbbc                                                                                         7 minutes ago       Exited              kube-scheduler            0                   6ff4b37ae8e9f       kube-scheduler-default-k8s-diff-port-062300
	d42428e694b5c       3861cfcd7c04c                                                                                         7 minutes ago       Exited              etcd                      0                   97cbb880cdb83       etcd-default-k8s-diff-port-062300
	
	
	==> coredns [8fcc6fb728ce] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = f869070685748660180df1b7a47d58cdafcf2f368266578c062d1151dc2c900964aecc5975e8882e6de6fdfb6460463e30ebfaad2ec8f0c3c6436f80225b3b5b
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:47174 - 41907 "HINFO IN 5898933462255465371.2595947378248807920. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.054238519s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: Trace[604374891]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (13-May-2024 23:46:58.495) (total time: 21050ms):
	Trace[604374891]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused 21050ms (23:47:19.542)
	Trace[604374891]: [21.05066262s] [21.05066262s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: Trace[352732049]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (13-May-2024 23:46:58.494) (total time: 21051ms):
	Trace[352732049]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused 21050ms (23:47:19.542)
	Trace[352732049]: [21.05139754s] [21.05139754s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: Trace[1622575344]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (13-May-2024 23:46:58.497) (total time: 21049ms):
	Trace[1622575344]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused 21048ms (23:47:19.542)
	Trace[1622575344]: [21.04939462s] [21.04939462s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> coredns [f8616bb48bd4] <==
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: Trace[965457935]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (13-May-2024 23:45:28.185) (total time: 21032ms):
	Trace[965457935]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused 21031ms (23:45:49.213)
	Trace[965457935]: [21.032218695s] [21.032218695s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: Trace[1052278361]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (13-May-2024 23:45:28.185) (total time: 21032ms):
	Trace[1052278361]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused 21031ms (23:45:49.213)
	Trace[1052278361]: [21.032102279s] [21.032102279s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: Trace[1496653283]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (13-May-2024 23:45:28.185) (total time: 21032ms):
	Trace[1496653283]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused 21031ms (23:45:49.213)
	Trace[1496653283]: [21.032259402s] [21.032259402s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = f869070685748660180df1b7a47d58cdafcf2f368266578c062d1151dc2c900964aecc5975e8882e6de6fdfb6460463e30ebfaad2ec8f0c3c6436f80225b3b5b
	[INFO] Reloading complete
	[INFO] 127.0.0.1:60908 - 12213 "HINFO IN 3758755791337204650.2829984788687160444. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.06739987s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	
	
	==> dmesg <==
	[May13 23:36] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [018718677fa4] <==
	{"level":"info","ts":"2024-05-13T23:48:30.681344Z","caller":"traceutil/trace.go:171","msg":"trace[1483908581] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-654756847f-rfwsl; range_end:; response_count:1; response_revision:741; }","duration":"396.831515ms","start":"2024-05-13T23:48:30.284497Z","end":"2024-05-13T23:48:30.681328Z","steps":["trace[1483908581] 'agreement among raft nodes before linearized reading'  (duration: 396.638486ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-13T23:48:30.681381Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-13T23:48:30.284479Z","time spent":"396.892425ms","remote":"127.0.0.1:36422","response type":"/etcdserverpb.KV/Range","request count":0,"request size":80,"response count":1,"response size":4111,"request content":"key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-654756847f-rfwsl\" "}
	{"level":"warn","ts":"2024-05-13T23:48:30.681384Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.65030255s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-13T23:48:30.681497Z","caller":"traceutil/trace.go:171","msg":"trace[723033882] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:741; }","duration":"1.650361159s","start":"2024-05-13T23:48:29.031053Z","end":"2024-05-13T23:48:30.681415Z","steps":["trace[723033882] 'agreement among raft nodes before linearized reading'  (duration: 1.650295049s)"],"step_count":1}
	{"level":"warn","ts":"2024-05-13T23:48:30.681673Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-13T23:48:29.031047Z","time spent":"1.650551388s","remote":"127.0.0.1:36240","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-05-13T23:48:30.681692Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"396.646287ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-569cc877fc-qgl9s.17cf31251ac5d369\" ","response":"range_response_count:1 size:804"}
	{"level":"warn","ts":"2024-05-13T23:48:30.68138Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"395.036742ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-qgl9s\" ","response":"range_response_count:1 size:4247"}
	{"level":"info","ts":"2024-05-13T23:48:30.682018Z","caller":"traceutil/trace.go:171","msg":"trace[1121107047] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-569cc877fc-qgl9s; range_end:; response_count:1; response_revision:741; }","duration":"395.617131ms","start":"2024-05-13T23:48:30.286319Z","end":"2024-05-13T23:48:30.681937Z","steps":["trace[1121107047] 'agreement among raft nodes before linearized reading'  (duration: 394.980434ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-13T23:48:30.682069Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-13T23:48:30.286312Z","time spent":"395.741649ms","remote":"127.0.0.1:36422","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4270,"request content":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-qgl9s\" "}
	{"level":"info","ts":"2024-05-13T23:48:30.681909Z","caller":"traceutil/trace.go:171","msg":"trace[730874148] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-569cc877fc-qgl9s.17cf31251ac5d369; range_end:; response_count:1; response_revision:741; }","duration":"396.771606ms","start":"2024-05-13T23:48:30.285018Z","end":"2024-05-13T23:48:30.681789Z","steps":["trace[730874148] 'agreement among raft nodes before linearized reading'  (duration: 396.383247ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-13T23:48:30.682189Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-13T23:48:30.285009Z","time spent":"397.165166ms","remote":"127.0.0.1:36294","response type":"/etcdserverpb.KV/Range","request count":0,"request size":79,"response count":1,"response size":827,"request content":"key:\"/registry/events/kube-system/metrics-server-569cc877fc-qgl9s.17cf31251ac5d369\" "}
	{"level":"info","ts":"2024-05-13T23:49:31.265527Z","caller":"traceutil/trace.go:171","msg":"trace[275386241] transaction","detail":"{read_only:false; response_revision:813; number_of_response:1; }","duration":"381.547847ms","start":"2024-05-13T23:49:30.883476Z","end":"2024-05-13T23:49:31.265024Z","steps":["trace[275386241] 'process raft request'  (duration: 381.398832ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-13T23:49:31.265662Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"291.02781ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-qgl9s\" ","response":"range_response_count:1 size:4247"}
	{"level":"warn","ts":"2024-05-13T23:49:31.265701Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-13T23:49:30.883464Z","time spent":"382.157206ms","remote":"127.0.0.1:36404","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:811 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-05-13T23:49:31.265713Z","caller":"traceutil/trace.go:171","msg":"trace[1286711214] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-569cc877fc-qgl9s; range_end:; response_count:1; response_revision:813; }","duration":"291.116719ms","start":"2024-05-13T23:49:30.974584Z","end":"2024-05-13T23:49:31.265701Z","steps":["trace[1286711214] 'agreement among raft nodes before linearized reading'  (duration: 290.948603ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-13T23:49:31.265528Z","caller":"traceutil/trace.go:171","msg":"trace[1409489639] linearizableReadLoop","detail":"{readStateIndex:872; appliedIndex:872; }","duration":"290.892897ms","start":"2024-05-13T23:49:30.974616Z","end":"2024-05-13T23:49:31.265508Z","steps":["trace[1409489639] 'read index received'  (duration: 290.884696ms)","trace[1409489639] 'applied index is now lower than readState.Index'  (duration: 6.601µs)"],"step_count":2}
	{"level":"warn","ts":"2024-05-13T23:49:32.371722Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"894.817853ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-qgl9s\" ","response":"range_response_count:1 size:4247"}
	{"level":"info","ts":"2024-05-13T23:49:32.371861Z","caller":"traceutil/trace.go:171","msg":"trace[1971644184] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-569cc877fc-qgl9s; range_end:; response_count:1; response_revision:813; }","duration":"894.990669ms","start":"2024-05-13T23:49:31.476851Z","end":"2024-05-13T23:49:32.371841Z","steps":["trace[1971644184] 'range keys from in-memory index tree'  (duration: 894.655537ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-13T23:49:32.371914Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-13T23:49:31.476836Z","time spent":"895.062777ms","remote":"127.0.0.1:36422","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4270,"request content":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-qgl9s\" "}
	{"level":"info","ts":"2024-05-13T23:49:33.41259Z","caller":"traceutil/trace.go:171","msg":"trace[222600303] linearizableReadLoop","detail":"{readStateIndex:873; appliedIndex:872; }","duration":"139.785746ms","start":"2024-05-13T23:49:33.272786Z","end":"2024-05-13T23:49:33.412572Z","steps":["trace[222600303] 'read index received'  (duration: 132.360021ms)","trace[222600303] 'applied index is now lower than readState.Index'  (duration: 7.423925ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-13T23:49:33.412782Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"139.969464ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1128"}
	{"level":"info","ts":"2024-05-13T23:49:33.412828Z","caller":"traceutil/trace.go:171","msg":"trace[1878700955] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:813; }","duration":"140.03177ms","start":"2024-05-13T23:49:33.272781Z","end":"2024-05-13T23:49:33.412812Z","steps":["trace[1878700955] 'agreement among raft nodes before linearized reading'  (duration: 139.861054ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-13T23:51:40.479969Z","caller":"wal/wal.go:805","msg":"slow fdatasync","took":"3.845794921s","expected-duration":"1s"}
	{"level":"info","ts":"2024-05-13T23:51:40.481053Z","caller":"traceutil/trace.go:171","msg":"trace[1797797404] transaction","detail":"{read_only:false; response_revision:916; number_of_response:1; }","duration":"3.847205333s","start":"2024-05-13T23:51:36.633826Z","end":"2024-05-13T23:51:40.481031Z","steps":["trace[1797797404] 'process raft request'  (duration: 3.846876184s)"],"step_count":1}
	{"level":"warn","ts":"2024-05-13T23:51:40.481395Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-13T23:51:36.63379Z","time spent":"3.847412864s","remote":"127.0.0.1:36404","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:915 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	
	
	==> etcd [d42428e694b5] <==
	{"level":"info","ts":"2024-05-13T23:45:09.900972Z","caller":"traceutil/trace.go:171","msg":"trace[1060432605] transaction","detail":"{read_only:false; response_revision:240; number_of_response:1; }","duration":"104.633781ms","start":"2024-05-13T23:45:09.796269Z","end":"2024-05-13T23:45:09.900903Z","steps":["trace[1060432605] 'process raft request'  (duration: 90.156142ms)","trace[1060432605] 'compare'  (duration: 14.187094ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-13T23:45:09.901144Z","caller":"traceutil/trace.go:171","msg":"trace[653386246] transaction","detail":"{read_only:false; response_revision:242; number_of_response:1; }","duration":"101.785641ms","start":"2024-05-13T23:45:09.799344Z","end":"2024-05-13T23:45:09.90113Z","steps":["trace[653386246] 'process raft request'  (duration: 101.501797ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-13T23:45:09.901135Z","caller":"traceutil/trace.go:171","msg":"trace[715200547] linearizableReadLoop","detail":"{readStateIndex:247; appliedIndex:245; }","duration":"102.581164ms","start":"2024-05-13T23:45:09.798543Z","end":"2024-05-13T23:45:09.901124Z","steps":["trace[715200547] 'read index received'  (duration: 87.891992ms)","trace[715200547] 'applied index is now lower than readState.Index'  (duration: 14.687972ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-13T23:45:09.901515Z","caller":"traceutil/trace.go:171","msg":"trace[924140122] transaction","detail":"{read_only:false; response_revision:241; number_of_response:1; }","duration":"105.156362ms","start":"2024-05-13T23:45:09.796341Z","end":"2024-05-13T23:45:09.901497Z","steps":["trace[924140122] 'process raft request'  (duration: 104.404545ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-13T23:45:09.903018Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.934328ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/default-k8s-diff-port-062300\" ","response":"range_response_count:1 size:724"}
	{"level":"info","ts":"2024-05-13T23:45:09.903075Z","caller":"traceutil/trace.go:171","msg":"trace[1306332714] range","detail":"{range_begin:/registry/csinodes/default-k8s-diff-port-062300; range_end:; response_count:1; response_revision:243; }","duration":"105.025041ms","start":"2024-05-13T23:45:09.798035Z","end":"2024-05-13T23:45:09.903061Z","steps":["trace[1306332714] 'agreement among raft nodes before linearized reading'  (duration: 103.299074ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-13T23:45:23.708604Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.977038ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2024-05-13T23:45:23.708783Z","caller":"traceutil/trace.go:171","msg":"trace[1498049944] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replicaset-controller; range_end:; response_count:1; response_revision:331; }","duration":"108.191971ms","start":"2024-05-13T23:45:23.60057Z","end":"2024-05-13T23:45:23.708762Z","steps":["trace[1498049944] 'agreement among raft nodes before linearized reading'  (duration: 79.185694ms)","trace[1498049944] 'range keys from in-memory index tree'  (duration: 28.797544ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-13T23:45:23.708899Z","caller":"traceutil/trace.go:171","msg":"trace[621709282] transaction","detail":"{read_only:false; response_revision:332; number_of_response:1; }","duration":"107.650687ms","start":"2024-05-13T23:45:23.601226Z","end":"2024-05-13T23:45:23.708877Z","steps":["trace[621709282] 'process raft request'  (duration: 88.548772ms)","trace[621709282] 'compare'  (duration: 18.469715ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-13T23:45:23.709053Z","caller":"traceutil/trace.go:171","msg":"trace[901368942] transaction","detail":"{read_only:false; response_revision:333; number_of_response:1; }","duration":"107.794009ms","start":"2024-05-13T23:45:23.601242Z","end":"2024-05-13T23:45:23.709036Z","steps":["trace[901368942] 'process raft request'  (duration: 107.497462ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-13T23:45:24.024454Z","caller":"traceutil/trace.go:171","msg":"trace[1139136533] transaction","detail":"{read_only:false; response_revision:341; number_of_response:1; }","duration":"119.978331ms","start":"2024-05-13T23:45:23.904449Z","end":"2024-05-13T23:45:24.024427Z","steps":["trace[1139136533] 'process raft request'  (duration: 119.740094ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-13T23:45:24.024851Z","caller":"traceutil/trace.go:171","msg":"trace[490703960] transaction","detail":"{read_only:false; response_revision:340; number_of_response:1; }","duration":"120.585728ms","start":"2024-05-13T23:45:23.904249Z","end":"2024-05-13T23:45:24.024835Z","steps":["trace[490703960] 'process raft request'  (duration: 103.61465ms)","trace[490703960] 'compare'  (duration: 15.006668ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-13T23:45:24.383347Z","caller":"traceutil/trace.go:171","msg":"trace[2061403002] transaction","detail":"{read_only:false; response_revision:349; number_of_response:1; }","duration":"104.381571ms","start":"2024-05-13T23:45:24.27889Z","end":"2024-05-13T23:45:24.383271Z","steps":["trace[2061403002] 'process raft request'  (duration: 26.80563ms)","trace[2061403002] 'compare'  (duration: 77.111067ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-13T23:45:26.929399Z","caller":"traceutil/trace.go:171","msg":"trace[846336472] transaction","detail":"{read_only:false; response_revision:376; number_of_response:1; }","duration":"108.687737ms","start":"2024-05-13T23:45:26.820562Z","end":"2024-05-13T23:45:26.92925Z","steps":["trace[846336472] 'process raft request'  (duration: 58.165065ms)","trace[846336472] 'compare'  (duration: 50.270937ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-13T23:45:27.998646Z","caller":"traceutil/trace.go:171","msg":"trace[1512643704] transaction","detail":"{read_only:false; response_revision:385; number_of_response:1; }","duration":"110.563357ms","start":"2024-05-13T23:45:27.888049Z","end":"2024-05-13T23:45:27.998612Z","steps":["trace[1512643704] 'process raft request'  (duration: 90.143839ms)","trace[1512643704] 'compare'  (duration: 20.190987ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-13T23:46:11.068429Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-05-13T23:46:11.068778Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"default-k8s-diff-port-062300","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.130.2:2380"],"advertise-client-urls":["https://192.168.130.2:2379"]}
	{"level":"warn","ts":"2024-05-13T23:46:11.068969Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-13T23:46:11.069347Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-13T23:46:11.268299Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.130.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-13T23:46:11.268456Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.130.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-05-13T23:46:11.268546Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"cb2ab03b8dc334b","current-leader-member-id":"cb2ab03b8dc334b"}
	{"level":"info","ts":"2024-05-13T23:46:11.367709Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.130.2:2380"}
	{"level":"info","ts":"2024-05-13T23:46:11.367967Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.130.2:2380"}
	{"level":"info","ts":"2024-05-13T23:46:11.367991Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"default-k8s-diff-port-062300","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.130.2:2380"],"advertise-client-urls":["https://192.168.130.2:2379"]}
	
	
	==> kernel <==
	 23:52:15 up  2:55,  0 users,  load average: 6.47, 6.58, 5.96
	Linux default-k8s-diff-port-062300 5.15.146.1-microsoft-standard-WSL2 #1 SMP Thu Jan 11 04:09:03 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kube-apiserver [4cd223df9b05] <==
	W0513 23:46:20.694713       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 23:46:20.736164       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 23:46:20.736215       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 23:46:20.754579       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 23:46:20.769638       1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 23:46:20.788756       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 23:46:20.799758       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 23:46:20.815845       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 23:46:20.816050       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 23:46:20.821270       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 23:46:20.827330       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 23:46:20.838376       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 23:46:20.846372       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 23:46:20.850939       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 23:46:20.887240       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 23:46:20.928592       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 23:46:20.959483       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 23:46:20.996734       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 23:46:21.023154       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 23:46:21.072389       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 23:46:21.088600       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 23:46:21.095237       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 23:46:21.102360       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 23:46:21.118628       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 23:46:21.169513       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [c12856199872] <==
	Trace[1546878091]: ---"Object stored in database" 1260ms (23:48:30.279)
	Trace[1546878091]: [4.80141009s] [4.80141009s] END
	I0513 23:48:30.280039       1 trace.go:236] Trace[889961165]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:174180f9-42ae-4805-ace9-8c9e3acf1da7,client:192.168.130.2,api-group:,api-version:v1,name:metrics-server-569cc877fc-qgl9s,subresource:status,namespace:kube-system,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/metrics-server-569cc877fc-qgl9s/status,user-agent:kubelet/v1.30.0 (linux/amd64) kubernetes/7c48c2b,verb:PATCH (13-May-2024 23:48:29.024) (total time: 1255ms):
	Trace[889961165]: ["GuaranteedUpdate etcd3" audit-id:174180f9-42ae-4805-ace9-8c9e3acf1da7,key:/pods/kube-system/metrics-server-569cc877fc-qgl9s,type:*core.Pod,resource:pods 1255ms (23:48:29.024)
	Trace[889961165]:  ---"Txn call completed" 1251ms (23:48:30.279)]
	Trace[889961165]: ---"Object stored in database" 1251ms (23:48:30.279)
	Trace[889961165]: [1.255437733s] [1.255437733s] END
	I0513 23:48:30.280640       1 trace.go:236] Trace[765578303]: "Get" accept:application/json, */*,audit-id:1f54dd2f-62bd-4fac-960a-37720da63fa1,client:192.168.130.1,api-group:,api-version:v1,name:default-k8s-diff-port-062300,subresource:,namespace:,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/default-k8s-diff-port-062300,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:GET (13-May-2024 23:48:29.025) (total time: 1255ms):
	Trace[765578303]: ---"About to write a response" 1254ms (23:48:30.279)
	Trace[765578303]: [1.255527447s] [1.255527447s] END
	I0513 23:48:30.683069       1 trace.go:236] Trace[599239996]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.130.2,type:*v1.Endpoints,resource:apiServerIPInfo (13-May-2024 23:48:28.065) (total time: 2617ms):
	Trace[599239996]: ---"initial value restored" 952ms (23:48:29.017)
	Trace[599239996]: ---"Transaction prepared" 742ms (23:48:29.759)
	Trace[599239996]: ---"Txn call completed" 923ms (23:48:30.682)
	Trace[599239996]: [2.617734536s] [2.617734536s] END
	I0513 23:49:32.374365       1 trace.go:236] Trace[182514630]: "Get" accept:application/json, */*,audit-id:9afd0261-ce11-4b70-8089-d7eae6ef2500,client:192.168.130.1,api-group:,api-version:v1,name:metrics-server-569cc877fc-qgl9s,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/metrics-server-569cc877fc-qgl9s,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:GET (13-May-2024 23:49:31.475) (total time: 898ms):
	Trace[182514630]: ---"About to write a response" 898ms (23:49:32.373)
	Trace[182514630]: [898.493212ms] [898.493212ms] END
	W0513 23:49:52.387333       1 handler_proxy.go:93] no RequestInfo found in the context
	E0513 23:49:52.387611       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0513 23:49:52.387626       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0513 23:49:52.389778       1 handler_proxy.go:93] no RequestInfo found in the context
	E0513 23:49:52.390019       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0513 23:49:52.390031       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [95b301adb6c0] <==
	I0513 23:48:11.513060       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="56.609µs"
	I0513 23:48:16.514410       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-654756847f" duration="74.712µs"
	I0513 23:48:30.280519       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="184.728µs"
	I0513 23:48:30.760639       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-654756847f" duration="164.225µs"
	E0513 23:48:40.020973       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0513 23:48:40.510138       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0513 23:48:43.499529       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-654756847f" duration="68.811µs"
	I0513 23:49:03.520584       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="77.211µs"
	E0513 23:49:10.025176       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0513 23:49:10.517869       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0513 23:49:11.494194       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-654756847f" duration="45.608µs"
	I0513 23:49:15.512558       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="263.841µs"
	I0513 23:49:25.503350       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-654756847f" duration="72.311µs"
	E0513 23:49:40.026728       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0513 23:49:40.523680       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0513 23:50:10.040372       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0513 23:50:10.534426       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0513 23:50:34.498558       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="387.75µs"
	E0513 23:50:40.044580       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0513 23:50:40.545353       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0513 23:50:41.477355       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-654756847f" duration="128.219µs"
	I0513 23:50:47.482291       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="141.821µs"
	I0513 23:50:55.493409       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-654756847f" duration="137.721µs"
	E0513 23:51:10.066780       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0513 23:51:10.564584       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-controller-manager [f2d573001dd7] <==
	I0513 23:45:22.795006       1 shared_informer.go:320] Caches are synced for resource quota
	I0513 23:45:22.810374       1 shared_informer.go:320] Caches are synced for expand
	I0513 23:45:23.170807       1 shared_informer.go:320] Caches are synced for garbage collector
	I0513 23:45:23.170924       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0513 23:45:23.181352       1 shared_informer.go:320] Caches are synced for garbage collector
	I0513 23:45:24.178791       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="852.14026ms"
	I0513 23:45:24.386910       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="207.920508ms"
	I0513 23:45:24.502329       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="115.220581ms"
	I0513 23:45:24.502505       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="65.31µs"
	I0513 23:45:25.581756       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="240.981325ms"
	I0513 23:45:25.614224       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="32.309398ms"
	I0513 23:45:25.780863       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="166.559382ms"
	I0513 23:45:25.781018       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="91.314µs"
	I0513 23:45:27.978907       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="211.629µs"
	I0513 23:45:28.408138       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="71.41µs"
	I0513 23:45:38.202464       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="120.814µs"
	I0513 23:45:38.530709       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="72.308µs"
	I0513 23:45:38.551619       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="91.31µs"
	I0513 23:45:38.579785       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="166.619µs"
	I0513 23:45:54.532453       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="18.975116ms"
	I0513 23:45:54.532713       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="62.81µs"
	I0513 23:46:08.998660       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="71.682462ms"
	I0513 23:46:09.068868       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="70.145416ms"
	I0513 23:46:09.069414       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="58.709µs"
	I0513 23:46:09.097753       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="60.71µs"
	
	
	==> kube-proxy [55e8f1e4d6d5] <==
	I0513 23:46:57.783625       1 server_linux.go:69] "Using iptables proxy"
	I0513 23:46:57.818220       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.130.2"]
	I0513 23:46:57.976354       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0513 23:46:57.976515       1 server_linux.go:165] "Using iptables Proxier"
	I0513 23:46:57.983085       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0513 23:46:57.983209       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0513 23:46:57.983277       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0513 23:46:57.984459       1 server.go:872] "Version info" version="v1.30.0"
	I0513 23:46:57.984514       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0513 23:46:57.992485       1 config.go:101] "Starting endpoint slice config controller"
	I0513 23:46:57.993081       1 config.go:319] "Starting node config controller"
	I0513 23:46:57.993523       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0513 23:46:57.992574       1 config.go:192] "Starting service config controller"
	I0513 23:46:57.993851       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0513 23:46:57.994386       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0513 23:46:58.093785       1 shared_informer.go:320] Caches are synced for node config
	I0513 23:46:58.094244       1 shared_informer.go:320] Caches are synced for service config
	I0513 23:46:58.094678       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [de63ce128d96] <==
	I0513 23:45:27.304314       1 server_linux.go:69] "Using iptables proxy"
	I0513 23:45:27.334145       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.130.2"]
	I0513 23:45:27.499214       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0513 23:45:27.499439       1 server_linux.go:165] "Using iptables Proxier"
	I0513 23:45:27.507782       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0513 23:45:27.507921       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0513 23:45:27.507949       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0513 23:45:27.508510       1 server.go:872] "Version info" version="v1.30.0"
	I0513 23:45:27.508566       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0513 23:45:27.515371       1 config.go:192] "Starting service config controller"
	I0513 23:45:27.515503       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0513 23:45:27.515556       1 config.go:101] "Starting endpoint slice config controller"
	I0513 23:45:27.515705       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0513 23:45:27.517322       1 config.go:319] "Starting node config controller"
	I0513 23:45:27.517340       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0513 23:45:27.679165       1 shared_informer.go:320] Caches are synced for node config
	I0513 23:45:27.615944       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0513 23:45:27.679942       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [00800d2623fc] <==
	I0513 23:46:47.108205       1 serving.go:380] Generated self-signed cert in-memory
	W0513 23:46:51.269828       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0513 23:46:51.281776       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0513 23:46:51.367720       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0513 23:46:51.370325       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0513 23:46:51.573692       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0513 23:46:51.573727       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0513 23:46:51.577687       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0513 23:46:51.577955       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0513 23:46:51.577977       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0513 23:46:51.578010       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0513 23:46:51.678068       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [d5fef62ceb9a] <==
	E0513 23:45:06.398524       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0513 23:45:06.542412       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0513 23:45:06.542538       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0513 23:45:06.543332       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0513 23:45:06.543448       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0513 23:45:06.545866       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0513 23:45:06.545968       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0513 23:45:06.604824       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0513 23:45:06.605003       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0513 23:45:06.627960       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0513 23:45:06.628156       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0513 23:45:06.668195       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0513 23:45:06.668304       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0513 23:45:06.859654       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0513 23:45:06.859764       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0513 23:45:06.882283       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0513 23:45:06.882489       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0513 23:45:06.909664       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0513 23:45:06.910000       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0513 23:45:08.057746       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0513 23:45:08.057901       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0513 23:45:13.906545       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0513 23:46:11.067919       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0513 23:46:11.068456       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0513 23:46:11.068472       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	May 13 23:50:05 default-k8s-diff-port-062300 kubelet[1471]: E0513 23:50:05.464911    1471 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-654756847f-rfwsl" podUID="44a20794-3cf6-4968-adb5-ef86df25b880"
	May 13 23:50:11 default-k8s-diff-port-062300 kubelet[1471]: E0513 23:50:11.470239    1471 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-qgl9s" podUID="60a7c560-5161-4bf4-9c27-0114cb776da6"
	May 13 23:50:18 default-k8s-diff-port-062300 kubelet[1471]: E0513 23:50:18.464953    1471 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-654756847f-rfwsl" podUID="44a20794-3cf6-4968-adb5-ef86df25b880"
	May 13 23:50:22 default-k8s-diff-port-062300 kubelet[1471]: E0513 23:50:22.520817    1471 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	May 13 23:50:22 default-k8s-diff-port-062300 kubelet[1471]: E0513 23:50:22.520955    1471 kuberuntime_image.go:55] "Failed to pull image" err="Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	May 13 23:50:22 default-k8s-diff-port-062300 kubelet[1471]: E0513 23:50:22.521226    1471 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hws4l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-qgl9s_kube-system(60a7c560-5161-4bf4-9c27-0114cb776da6): ErrImagePull: Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host
	May 13 23:50:22 default-k8s-diff-port-062300 kubelet[1471]: E0513 23:50:22.521264    1471 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host\"" pod="kube-system/metrics-server-569cc877fc-qgl9s" podUID="60a7c560-5161-4bf4-9c27-0114cb776da6"
	May 13 23:50:29 default-k8s-diff-port-062300 kubelet[1471]: E0513 23:50:29.944231    1471 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" image="registry.k8s.io/echoserver:1.4"
	May 13 23:50:29 default-k8s-diff-port-062300 kubelet[1471]: E0513 23:50:29.944302    1471 kuberuntime_image.go:55] "Failed to pull image" err="[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" image="registry.k8s.io/echoserver:1.4"
	May 13 23:50:29 default-k8s-diff-port-062300 kubelet[1471]: E0513 23:50:29.944621    1471 kuberuntime_manager.go:1256] container &Container{Name:dashboard-metrics-scraper,Image:registry.k8s.io/echoserver:1.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:8000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-volume,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9bms8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 8000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:30,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:*2001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dashboard-metrics-scraper-654756847f-rfwsl_kubernetes-dashboard(44a20794-3cf6-4968-adb5-ef86df25b880): ErrImagePull: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/
	May 13 23:50:29 default-k8s-diff-port-062300 kubelet[1471]: E0513 23:50:29.944686    1471 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-654756847f-rfwsl" podUID="44a20794-3cf6-4968-adb5-ef86df25b880"
	May 13 23:50:34 default-k8s-diff-port-062300 kubelet[1471]: E0513 23:50:34.471021    1471 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-qgl9s" podUID="60a7c560-5161-4bf4-9c27-0114cb776da6"
	May 13 23:50:41 default-k8s-diff-port-062300 kubelet[1471]: E0513 23:50:41.460682    1471 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-654756847f-rfwsl" podUID="44a20794-3cf6-4968-adb5-ef86df25b880"
	May 13 23:50:47 default-k8s-diff-port-062300 kubelet[1471]: E0513 23:50:47.461333    1471 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-qgl9s" podUID="60a7c560-5161-4bf4-9c27-0114cb776da6"
	May 13 23:50:55 default-k8s-diff-port-062300 kubelet[1471]: E0513 23:50:55.461540    1471 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-654756847f-rfwsl" podUID="44a20794-3cf6-4968-adb5-ef86df25b880"
	May 13 23:50:58 default-k8s-diff-port-062300 kubelet[1471]: E0513 23:50:58.466453    1471 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-qgl9s" podUID="60a7c560-5161-4bf4-9c27-0114cb776da6"
	May 13 23:51:10 default-k8s-diff-port-062300 kubelet[1471]: E0513 23:51:10.458524    1471 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-qgl9s" podUID="60a7c560-5161-4bf4-9c27-0114cb776da6"
	May 13 23:51:10 default-k8s-diff-port-062300 kubelet[1471]: E0513 23:51:10.458632    1471 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-654756847f-rfwsl" podUID="44a20794-3cf6-4968-adb5-ef86df25b880"
	May 13 23:51:24 default-k8s-diff-port-062300 kubelet[1471]: E0513 23:51:24.459047    1471 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-qgl9s" podUID="60a7c560-5161-4bf4-9c27-0114cb776da6"
	May 13 23:51:25 default-k8s-diff-port-062300 kubelet[1471]: E0513 23:51:25.457006    1471 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-654756847f-rfwsl" podUID="44a20794-3cf6-4968-adb5-ef86df25b880"
	May 13 23:51:36 default-k8s-diff-port-062300 kubelet[1471]: E0513 23:51:36.468741    1471 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-qgl9s" podUID="60a7c560-5161-4bf4-9c27-0114cb776da6"
	May 13 23:51:36 default-k8s-diff-port-062300 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	May 13 23:51:36 default-k8s-diff-port-062300 kubelet[1471]: I0513 23:51:36.669136    1471 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	May 13 23:51:36 default-k8s-diff-port-062300 systemd[1]: kubelet.service: Deactivated successfully.
	May 13 23:51:36 default-k8s-diff-port-062300 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [a91aebc02bd9] <==
	2024/05/13 23:47:34 Starting overwatch
	2024/05/13 23:47:34 Using namespace: kubernetes-dashboard
	2024/05/13 23:47:34 Using in-cluster config to connect to apiserver
	2024/05/13 23:47:34 Using secret token for csrf signing
	2024/05/13 23:47:34 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/05/13 23:47:34 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/05/13 23:47:34 Successful initial request to the apiserver, version: v1.30.0
	2024/05/13 23:47:34 Generating JWE encryption key
	2024/05/13 23:47:34 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/05/13 23:47:34 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/05/13 23:47:35 Initializing JWE encryption key from synchronized object
	2024/05/13 23:47:35 Creating in-cluster Sidecar client
	2024/05/13 23:47:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/13 23:47:35 Serving insecurely on HTTP port: 9090
	2024/05/13 23:48:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/13 23:48:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/13 23:49:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/13 23:49:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/13 23:50:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/13 23:50:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/13 23:51:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/13 23:51:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [889b473e730e] <==
	I0513 23:47:33.974594       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0513 23:47:33.995668       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0513 23:47:33.996404       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0513 23:47:51.448284       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0513 23:47:51.448869       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-062300_50754299-4093-42c0-8e09-2d0748e282ef!
	I0513 23:47:51.448867       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1224bdee-850c-49b7-90b1-e35e23f8e17a", APIVersion:"v1", ResourceVersion:"696", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-062300_50754299-4093-42c0-8e09-2d0748e282ef became leader
	I0513 23:47:51.550436       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-062300_50754299-4093-42c0-8e09-2d0748e282ef!
	
	
	==> storage-provisioner [ee111c323bdc] <==
	I0513 23:46:56.775399       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0513 23:47:17.830173       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

-- /stdout --
** stderr ** 
	W0513 23:52:03.140246    9160 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-062300 -n default-k8s-diff-port-062300
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-062300 -n default-k8s-diff-port-062300: exit status 2 (1.8309769s)

-- stdout --
	Paused

-- /stdout --
** stderr ** 
	W0513 23:52:17.549841    7104 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "default-k8s-diff-port-062300" apiserver is not running, skipping kubectl commands (state="Paused")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (44.28s)


Test pass (309/339)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 11.1
4 TestDownloadOnly/v1.20.0/preload-exists 0.08
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.66
9 TestDownloadOnly/v1.20.0/DeleteAll 2.63
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 1.26
12 TestDownloadOnly/v1.30.0/json-events 7.7
13 TestDownloadOnly/v1.30.0/preload-exists 0
16 TestDownloadOnly/v1.30.0/kubectl 0
17 TestDownloadOnly/v1.30.0/LogsDuration 0.57
18 TestDownloadOnly/v1.30.0/DeleteAll 1.98
19 TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds 1.08
20 TestDownloadOnlyKic 3.7
21 TestBinaryMirror 3.44
22 TestOffline 171.19
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.26
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.26
27 TestAddons/Setup 485.14
31 TestAddons/parallel/InspektorGadget 14.36
32 TestAddons/parallel/MetricsServer 7.22
33 TestAddons/parallel/HelmTiller 24.92
35 TestAddons/parallel/CSI 87.39
36 TestAddons/parallel/Headlamp 34.34
37 TestAddons/parallel/CloudSpanner 7.8
38 TestAddons/parallel/LocalPath 104.14
39 TestAddons/parallel/NvidiaDevicePlugin 8
40 TestAddons/parallel/Yakd 6.03
43 TestAddons/serial/GCPAuth/Namespaces 0.33
44 TestAddons/StoppedEnableDisable 14.27
45 TestCertOptions 100.71
46 TestCertExpiration 320.95
47 TestDockerFlags 78.2
48 TestForceSystemdFlag 161.69
49 TestForceSystemdEnv 96.57
56 TestErrorSpam/start 3.75
57 TestErrorSpam/status 3.65
58 TestErrorSpam/pause 3.82
59 TestErrorSpam/unpause 5.24
60 TestErrorSpam/stop 20.13
63 TestFunctional/serial/CopySyncFile 0.03
64 TestFunctional/serial/StartWithProxy 84.31
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 41.71
67 TestFunctional/serial/KubeContext 0.13
68 TestFunctional/serial/KubectlGetPods 0.23
71 TestFunctional/serial/CacheCmd/cache/add_remote 6.78
72 TestFunctional/serial/CacheCmd/cache/add_local 4.21
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.25
74 TestFunctional/serial/CacheCmd/cache/list 0.25
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 1.17
76 TestFunctional/serial/CacheCmd/cache/cache_reload 5.15
77 TestFunctional/serial/CacheCmd/cache/delete 0.49
78 TestFunctional/serial/MinikubeKubectlCmd 0.51
80 TestFunctional/serial/ExtraConfig 52.27
81 TestFunctional/serial/ComponentHealth 0.18
82 TestFunctional/serial/LogsCmd 2.59
83 TestFunctional/serial/LogsFileCmd 2.77
84 TestFunctional/serial/InvalidService 5.79
88 TestFunctional/parallel/DryRun 2.9
89 TestFunctional/parallel/InternationalLanguage 1.1
90 TestFunctional/parallel/StatusCmd 4.29
95 TestFunctional/parallel/AddonsCmd 0.72
96 TestFunctional/parallel/PersistentVolumeClaim 104.05
98 TestFunctional/parallel/SSHCmd 3.24
99 TestFunctional/parallel/CpCmd 7.76
100 TestFunctional/parallel/MySQL 73.65
101 TestFunctional/parallel/FileSync 1.25
102 TestFunctional/parallel/CertSync 8.15
106 TestFunctional/parallel/NodeLabels 0.19
108 TestFunctional/parallel/NonActiveRuntimeDisabled 1.23
110 TestFunctional/parallel/License 3.53
111 TestFunctional/parallel/Version/short 0.24
112 TestFunctional/parallel/Version/components 2.34
113 TestFunctional/parallel/ImageCommands/ImageListShort 0.86
114 TestFunctional/parallel/ImageCommands/ImageListTable 0.81
115 TestFunctional/parallel/ImageCommands/ImageListJson 0.77
116 TestFunctional/parallel/ImageCommands/ImageListYaml 0.82
117 TestFunctional/parallel/ImageCommands/ImageBuild 11.01
118 TestFunctional/parallel/ImageCommands/Setup 4.65
119 TestFunctional/parallel/ServiceCmd/DeployApp 22.65
120 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 13.77
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 1.97
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 23.9
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 6.92
127 TestFunctional/parallel/ServiceCmd/List 1.49
128 TestFunctional/parallel/ServiceCmd/JSONOutput 1.67
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 17.39
130 TestFunctional/parallel/ServiceCmd/HTTPS 15.03
131 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.21
136 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.23
137 TestFunctional/parallel/ServiceCmd/Format 15.03
138 TestFunctional/parallel/ImageCommands/ImageSaveToFile 5.56
139 TestFunctional/parallel/ProfileCmd/profile_not_create 1.96
140 TestFunctional/parallel/ImageCommands/ImageRemove 1.63
141 TestFunctional/parallel/ProfileCmd/profile_list 1.57
142 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 5.65
143 TestFunctional/parallel/ProfileCmd/profile_json_output 1.68
144 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 5.81
145 TestFunctional/parallel/ServiceCmd/URL 15.01
146 TestFunctional/parallel/DockerEnv/powershell 8.3
147 TestFunctional/parallel/UpdateContextCmd/no_changes 0.62
148 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.61
149 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.59
150 TestFunctional/delete_addon-resizer_images 0.44
151 TestFunctional/delete_my-image_image 0.18
152 TestFunctional/delete_minikube_cached_images 0.17
156 TestMultiControlPlane/serial/StartCluster 229.32
157 TestMultiControlPlane/serial/DeployApp 13.25
158 TestMultiControlPlane/serial/PingHostFromPods 3.49
159 TestMultiControlPlane/serial/AddWorkerNode 57.38
160 TestMultiControlPlane/serial/NodeLabels 0.19
161 TestMultiControlPlane/serial/HAppyAfterClusterStart 3.4
162 TestMultiControlPlane/serial/CopyFile 69.04
163 TestMultiControlPlane/serial/StopSecondaryNode 15.43
164 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 2.49
165 TestMultiControlPlane/serial/RestartSecondaryNode 149.33
166 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 3.39
167 TestMultiControlPlane/serial/RestartClusterKeepsNodes 241.41
168 TestMultiControlPlane/serial/DeleteSecondaryNode 22.83
169 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 2.51
170 TestMultiControlPlane/serial/StopCluster 37.38
171 TestMultiControlPlane/serial/RestartCluster 110.19
172 TestMultiControlPlane/serial/DegradedAfterClusterRestart 2.25
173 TestMultiControlPlane/serial/AddSecondaryNode 76.6
174 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 3.36
177 TestImageBuild/serial/Setup 66.65
178 TestImageBuild/serial/NormalBuild 4.13
179 TestImageBuild/serial/BuildWithBuildArg 2.81
180 TestImageBuild/serial/BuildWithDockerIgnore 1.97
181 TestImageBuild/serial/BuildWithSpecifiedDockerfile 2.44
185 TestJSONOutput/start/Command 110.14
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 1.63
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 1.43
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 7.69
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 1.38
210 TestKicCustomNetwork/create_custom_network 74.73
211 TestKicCustomNetwork/use_default_bridge_network 73.47
212 TestKicExistingNetwork 75.29
213 TestKicCustomSubnet 77.39
214 TestKicStaticIP 75.46
215 TestMainNoArgs 0.23
216 TestMinikubeProfile 140.49
219 TestMountStart/serial/StartWithMountFirst 19.43
220 TestMountStart/serial/VerifyMountFirst 1.1
221 TestMountStart/serial/StartWithMountSecond 19.32
222 TestMountStart/serial/VerifyMountSecond 1.07
223 TestMountStart/serial/DeleteFirst 3.84
224 TestMountStart/serial/VerifyMountPostDelete 1.08
225 TestMountStart/serial/Stop 2.46
226 TestMountStart/serial/RestartStopped 13.24
227 TestMountStart/serial/VerifyMountPostStop 1.1
230 TestMultiNode/serial/FreshStart2Nodes 148.01
231 TestMultiNode/serial/DeployApp2Nodes 27.72
232 TestMultiNode/serial/PingHostFrom2Pods 2.38
233 TestMultiNode/serial/AddNode 51.7
234 TestMultiNode/serial/MultiNodeLabels 0.17
235 TestMultiNode/serial/ProfileList 1.45
236 TestMultiNode/serial/CopyFile 38.95
237 TestMultiNode/serial/StopNode 6.31
238 TestMultiNode/serial/StartAfterStop 20.41
239 TestMultiNode/serial/RestartKeepsNodes 123.87
240 TestMultiNode/serial/DeleteNode 13.29
241 TestMultiNode/serial/StopMultiNode 25.09
242 TestMultiNode/serial/RestartMultiNode 51
243 TestMultiNode/serial/ValidateNameConflict 64.03
247 TestPreload 182.08
248 TestScheduledStopWindows 148.23
252 TestInsufficientStorage 45.7
253 TestRunningBinaryUpgrade 246.45
255 TestKubernetesUpgrade 554.4
256 TestMissingContainerUpgrade 389.73
258 TestNoKubernetes/serial/StartNoK8sWithVersion 0.3
259 TestNoKubernetes/serial/StartWithK8s 123.95
260 TestNoKubernetes/serial/StartWithStopK8s 44.03
261 TestNoKubernetes/serial/Start 36.18
262 TestStoppedBinaryUpgrade/Setup 1.22
263 TestStoppedBinaryUpgrade/Upgrade 216.11
264 TestNoKubernetes/serial/VerifyK8sNotRunning 1.37
265 TestNoKubernetes/serial/ProfileList 13.14
266 TestNoKubernetes/serial/Stop 3.45
267 TestNoKubernetes/serial/StartNoArgs 21.78
268 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 1.44
269 TestStoppedBinaryUpgrade/MinikubeLogs 3.97
278 TestPause/serial/Start 114.92
279 TestPause/serial/SecondStartNoReconfiguration 49.39
280 TestPause/serial/Pause 1.73
281 TestPause/serial/VerifyStatus 1.43
282 TestPause/serial/Unpause 1.55
283 TestPause/serial/PauseAgain 2.05
284 TestPause/serial/DeletePaused 6.35
285 TestPause/serial/VerifyDeletedResources 19.66
298 TestStartStop/group/old-k8s-version/serial/FirstStart 251.59
300 TestStartStop/group/no-preload/serial/FirstStart 149.58
302 TestStartStop/group/embed-certs/serial/FirstStart 109.56
304 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 129.95
305 TestStartStop/group/embed-certs/serial/DeployApp 11.94
306 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.6
307 TestStartStop/group/no-preload/serial/DeployApp 9.71
308 TestStartStop/group/embed-certs/serial/Stop 12.88
309 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.48
310 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.73
311 TestStartStop/group/no-preload/serial/Stop 13.09
312 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 1.2
313 TestStartStop/group/embed-certs/serial/SecondStart 283.06
314 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.61
315 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.28
316 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 1.23
317 TestStartStop/group/no-preload/serial/SecondStart 298.08
318 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 1.15
319 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 298.27
320 TestStartStop/group/old-k8s-version/serial/DeployApp 11.38
321 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.84
322 TestStartStop/group/old-k8s-version/serial/Stop 12.9
323 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 1.05
325 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.02
326 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.41
327 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.86
328 TestStartStop/group/embed-certs/serial/Pause 11.18
329 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.04
331 TestStartStop/group/newest-cni/serial/FirstStart 107.82
332 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.51
333 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 1.29
334 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.03
335 TestStartStop/group/no-preload/serial/Pause 10.07
336 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.55
337 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 1.08
339 TestNetworkPlugins/group/auto/Start 105.81
340 TestNetworkPlugins/group/kindnet/Start 125.96
341 TestStartStop/group/newest-cni/serial/DeployApp 0
342 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 4.39
343 TestStartStop/group/newest-cni/serial/Stop 8.2
344 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 1.37
345 TestStartStop/group/newest-cni/serial/SecondStart 40.16
346 TestNetworkPlugins/group/auto/KubeletFlags 1.55
347 TestNetworkPlugins/group/auto/NetCatPod 20.92
348 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
349 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
350 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 1.16
351 TestNetworkPlugins/group/auto/DNS 0.57
352 TestNetworkPlugins/group/auto/Localhost 0.43
353 TestStartStop/group/newest-cni/serial/Pause 13.88
354 TestNetworkPlugins/group/auto/HairPin 0.57
355 TestNetworkPlugins/group/calico/Start 221.53
356 TestNetworkPlugins/group/kindnet/ControllerPod 6.04
357 TestNetworkPlugins/group/kindnet/KubeletFlags 1.5
358 TestNetworkPlugins/group/kindnet/NetCatPod 31.94
359 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.26
360 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.95
361 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 1.21
362 TestNetworkPlugins/group/kindnet/DNS 0.52
363 TestNetworkPlugins/group/kindnet/Localhost 0.5
364 TestStartStop/group/old-k8s-version/serial/Pause 13.75
365 TestNetworkPlugins/group/kindnet/HairPin 0.54
366 TestNetworkPlugins/group/custom-flannel/Start 151.2
367 TestNetworkPlugins/group/false/Start 139.7
368 TestNetworkPlugins/group/enable-default-cni/Start 115.84
369 TestNetworkPlugins/group/custom-flannel/KubeletFlags 1.58
370 TestNetworkPlugins/group/custom-flannel/NetCatPod 20.88
371 TestNetworkPlugins/group/false/KubeletFlags 1.47
372 TestNetworkPlugins/group/calico/ControllerPod 6.04
373 TestNetworkPlugins/group/false/NetCatPod 22.74
374 TestNetworkPlugins/group/custom-flannel/DNS 0.64
375 TestNetworkPlugins/group/custom-flannel/Localhost 0.58
376 TestNetworkPlugins/group/custom-flannel/HairPin 0.57
377 TestNetworkPlugins/group/calico/KubeletFlags 1.85
378 TestNetworkPlugins/group/calico/NetCatPod 25.8
379 TestNetworkPlugins/group/false/DNS 0.54
380 TestNetworkPlugins/group/false/Localhost 0.45
381 TestNetworkPlugins/group/false/HairPin 0.52
382 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 1.64
383 TestNetworkPlugins/group/enable-default-cni/NetCatPod 26.04
384 TestNetworkPlugins/group/calico/DNS 0.67
385 TestNetworkPlugins/group/calico/Localhost 0.72
386 TestNetworkPlugins/group/calico/HairPin 0.74
387 TestNetworkPlugins/group/enable-default-cni/DNS 0.65
388 TestNetworkPlugins/group/enable-default-cni/Localhost 0.63
389 TestNetworkPlugins/group/enable-default-cni/HairPin 0.61
390 TestNetworkPlugins/group/flannel/Start 142.95
391 TestNetworkPlugins/group/bridge/Start 101.82
392 TestNetworkPlugins/group/kubenet/Start 128.31
393 TestNetworkPlugins/group/bridge/KubeletFlags 2.03
394 TestNetworkPlugins/group/bridge/NetCatPod 19.82
395 TestNetworkPlugins/group/bridge/DNS 0.35
396 TestNetworkPlugins/group/bridge/Localhost 0.38
397 TestNetworkPlugins/group/bridge/HairPin 0.34
398 TestNetworkPlugins/group/flannel/ControllerPod 6.03
399 TestNetworkPlugins/group/flannel/KubeletFlags 1.34
400 TestNetworkPlugins/group/flannel/NetCatPod 23.76
401 TestNetworkPlugins/group/kubenet/KubeletFlags 1.57
402 TestNetworkPlugins/group/kubenet/NetCatPod 22.8
403 TestNetworkPlugins/group/flannel/DNS 0.47
404 TestNetworkPlugins/group/flannel/Localhost 0.47
405 TestNetworkPlugins/group/flannel/HairPin 0.43
406 TestNetworkPlugins/group/kubenet/DNS 0.77
407 TestNetworkPlugins/group/kubenet/Localhost 0.41
408 TestNetworkPlugins/group/kubenet/HairPin 0.51
TestDownloadOnly/v1.20.0/json-events (11.1s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-450300 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-450300 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker: (11.1028845s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (11.10s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0.08s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.08s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.66s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-450300
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-450300: exit status 85 (659.1716ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-450300 | minikube4\jenkins | v1.33.1 | 13 May 24 22:21 UTC |          |
	|         | -p download-only-450300        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=docker                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/13 22:21:47
	Running on machine: minikube4
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0513 22:21:47.798158    9580 out.go:291] Setting OutFile to fd 584 ...
	I0513 22:21:47.799515    9580 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 22:21:47.799582    9580 out.go:304] Setting ErrFile to fd 608...
	I0513 22:21:47.799645    9580 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0513 22:21:47.813691    9580 root.go:314] Error reading config file at C:\Users\jenkins.minikube4\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube4\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0513 22:21:47.824059    9580 out.go:298] Setting JSON to true
	I0513 22:21:47.826557    9580 start.go:129] hostinfo: {"hostname":"minikube4","uptime":5146,"bootTime":1715633761,"procs":204,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4355 Build 19045.4355","kernelVersion":"10.0.19045.4355 Build 19045.4355","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W0513 22:21:47.827081    9580 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 22:21:47.834057    9580 out.go:97] [download-only-450300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4355 Build 19045.4355
	I0513 22:21:47.834233    9580 notify.go:220] Checking for updates...
	W0513 22:21:47.834233    9580 preload.go:294] Failed to list preload files: open C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0513 22:21:47.836763    9580 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0513 22:21:47.839432    9580 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I0513 22:21:47.841878    9580 out.go:169] MINIKUBE_LOCATION=18872
	I0513 22:21:47.843973    9580 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0513 22:21:47.850459    9580 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0513 22:21:47.850832    9580 driver.go:392] Setting default libvirt URI to qemu:///system
	I0513 22:21:48.118250    9580 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0513 22:21:48.128590    9580 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0513 22:21:49.517153    9580 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.3885092s)
	I0513 22:21:49.518154    9580 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:80 SystemTime:2024-05-13 22:21:49.48003449 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0513 22:21:49.522162    9580 out.go:97] Using the docker driver based on user configuration
	I0513 22:21:49.522162    9580 start.go:297] selected driver: docker
	I0513 22:21:49.522162    9580 start.go:901] validating driver "docker" against <nil>
	I0513 22:21:49.537155    9580 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0513 22:21:49.845445    9580 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:80 SystemTime:2024-05-13 22:21:49.807771249 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0513 22:21:49.845445    9580 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0513 22:21:49.947274    9580 start_flags.go:393] Using suggested 16300MB memory alloc based on sys=65534MB, container=32098MB
	I0513 22:21:49.948191    9580 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0513 22:21:49.951305    9580 out.go:169] Using Docker Desktop driver with root privileges
	I0513 22:21:49.953027    9580 cni.go:84] Creating CNI manager for ""
	I0513 22:21:49.953027    9580 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0513 22:21:49.954091    9580 start.go:340] cluster config:
	{Name:download-only-450300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:16300 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-450300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 22:21:49.955606    9580 out.go:97] Starting "download-only-450300" primary control-plane node in "download-only-450300" cluster
	I0513 22:21:49.956607    9580 cache.go:121] Beginning downloading kic base image for docker with docker
	I0513 22:21:49.958905    9580 out.go:97] Pulling base image v0.0.44 ...
	I0513 22:21:49.958905    9580 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0513 22:21:49.959536    9580 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e in local docker daemon
	I0513 22:21:49.999316    9580 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0513 22:21:49.999316    9580 cache.go:56] Caching tarball of preloaded images
	I0513 22:21:50.000015    9580 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0513 22:21:50.002206    9580 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0513 22:21:50.002206    9580 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0513 22:21:50.063253    9580 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0513 22:21:50.123808    9580 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e to local cache
	I0513 22:21:50.123808    9580 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.44@sha256_eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e.tar
	I0513 22:21:50.124516    9580 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.44@sha256_eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e.tar
	I0513 22:21:50.124630    9580 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e in local cache directory
	I0513 22:21:50.125931    9580 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e to local cache
	I0513 22:21:54.311072    9580 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0513 22:21:54.312080    9580 preload.go:255] verifying checksum of C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0513 22:21:54.501995    9580 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e as a tarball
	I0513 22:21:55.306114    9580 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0513 22:21:55.307122    9580 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\download-only-450300\config.json ...
	I0513 22:21:55.307743    9580 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\download-only-450300\config.json: {Name:mkfed1474a570a95c97f67d2d47d295751a62485 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 22:21:55.308431    9580 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0513 22:21:55.309634    9580 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\windows\amd64\v1.20.0/kubectl.exe
	
	
	* The control-plane node download-only-450300 host does not exist
	  To start a cluster, run: "minikube start -p download-only-450300"

                                                
                                                
-- /stdout --
** stderr ** 
	W0513 22:21:58.919728   10652 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.66s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (2.63s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (2.625044s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (2.63s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.26s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-450300
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-450300: (1.260157s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.26s)

                                                
                                    
TestDownloadOnly/v1.30.0/json-events (7.7s)

=== RUN   TestDownloadOnly/v1.30.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-491800 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-491800 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=docker --driver=docker: (7.7019974s)
--- PASS: TestDownloadOnly/v1.30.0/json-events (7.70s)

                                                
                                    
TestDownloadOnly/v1.30.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.0/kubectl
--- PASS: TestDownloadOnly/v1.30.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0/LogsDuration (0.57s)

=== RUN   TestDownloadOnly/v1.30.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-491800
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-491800: exit status 85 (572.3741ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-450300 | minikube4\jenkins | v1.33.1 | 13 May 24 22:21 UTC |                     |
	|         | -p download-only-450300        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=docker                |                      |                   |         |                     |                     |
	| delete  | --all                          | minikube             | minikube4\jenkins | v1.33.1 | 13 May 24 22:21 UTC | 13 May 24 22:22 UTC |
	| delete  | -p download-only-450300        | download-only-450300 | minikube4\jenkins | v1.33.1 | 13 May 24 22:22 UTC | 13 May 24 22:22 UTC |
	| start   | -o=json --download-only        | download-only-491800 | minikube4\jenkins | v1.33.1 | 13 May 24 22:22 UTC |                     |
	|         | -p download-only-491800        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=docker                |                      |                   |         |                     |                     |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/13 22:22:03
	Running on machine: minikube4
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0513 22:22:03.547339   16144 out.go:291] Setting OutFile to fd 772 ...
	I0513 22:22:03.547930   16144 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 22:22:03.547930   16144 out.go:304] Setting ErrFile to fd 776...
	I0513 22:22:03.547930   16144 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 22:22:03.569256   16144 out.go:298] Setting JSON to true
	I0513 22:22:03.571558   16144 start.go:129] hostinfo: {"hostname":"minikube4","uptime":5162,"bootTime":1715633761,"procs":204,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4355 Build 19045.4355","kernelVersion":"10.0.19045.4355 Build 19045.4355","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W0513 22:22:03.571558   16144 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 22:22:03.739060   16144 out.go:97] [download-only-491800] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4355 Build 19045.4355
	I0513 22:22:03.739895   16144 notify.go:220] Checking for updates...
	I0513 22:22:03.742067   16144 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0513 22:22:03.744203   16144 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I0513 22:22:03.746403   16144 out.go:169] MINIKUBE_LOCATION=18872
	I0513 22:22:03.751367   16144 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0513 22:22:03.756194   16144 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0513 22:22:03.757520   16144 driver.go:392] Setting default libvirt URI to qemu:///system
	I0513 22:22:04.022883   16144 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0513 22:22:04.032483   16144 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0513 22:22:04.350485   16144 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:80 SystemTime:2024-05-13 22:22:04.31193932 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0513 22:22:04.495971   16144 out.go:97] Using the docker driver based on user configuration
	I0513 22:22:04.496854   16144 start.go:297] selected driver: docker
	I0513 22:22:04.496854   16144 start.go:901] validating driver "docker" against <nil>
	I0513 22:22:04.515538   16144 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0513 22:22:04.819093   16144 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:80 SystemTime:2024-05-13 22:22:04.779332165 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0513 22:22:04.819474   16144 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0513 22:22:04.864957   16144 start_flags.go:393] Using suggested 16300MB memory alloc based on sys=65534MB, container=32098MB
	I0513 22:22:04.866330   16144 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0513 22:22:04.875585   16144 out.go:169] Using Docker Desktop driver with root privileges
	I0513 22:22:04.880579   16144 cni.go:84] Creating CNI manager for ""
	I0513 22:22:04.880579   16144 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0513 22:22:04.880579   16144 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0513 22:22:04.881492   16144 start.go:340] cluster config:
	{Name:download-only-491800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:16300 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:download-only-491800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 22:22:04.885075   16144 out.go:97] Starting "download-only-491800" primary control-plane node in "download-only-491800" cluster
	I0513 22:22:04.885150   16144 cache.go:121] Beginning downloading kic base image for docker with docker
	I0513 22:22:04.887055   16144 out.go:97] Pulling base image v0.0.44 ...
	I0513 22:22:04.887055   16144 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 22:22:04.887055   16144 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e in local docker daemon
	I0513 22:22:04.930448   16144 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0513 22:22:04.930889   16144 cache.go:56] Caching tarball of preloaded images
	I0513 22:22:04.931275   16144 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 22:22:04.944260   16144 out.go:97] Downloading Kubernetes v1.30.0 preload ...
	I0513 22:22:04.944260   16144 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 ...
	I0513 22:22:05.006988   16144 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4?checksum=md5:00b6acf85a82438f3897c0a6fafdcee7 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0513 22:22:05.066715   16144 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e to local cache
	I0513 22:22:05.066833   16144 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.44@sha256_eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e.tar
	I0513 22:22:05.067092   16144 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase_v0.0.44@sha256_eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e.tar
	I0513 22:22:05.067196   16144 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e in local cache directory
	I0513 22:22:05.067377   16144 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e in local cache directory, skipping pull
	I0513 22:22:05.067405   16144 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e exists in cache, skipping pull
	I0513 22:22:05.067572   16144 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e as a tarball
	
	
	* The control-plane node download-only-491800 host does not exist
	  To start a cluster, run: "minikube start -p download-only-491800"

-- /stdout --
** stderr ** 
	W0513 22:22:11.180846   14896 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0/LogsDuration (0.57s)

TestDownloadOnly/v1.30.0/DeleteAll (1.98s)

=== RUN   TestDownloadOnly/v1.30.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.9831298s)
--- PASS: TestDownloadOnly/v1.30.0/DeleteAll (1.98s)

TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (1.08s)

=== RUN   TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-491800
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-491800: (1.0756629s)
--- PASS: TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (1.08s)

TestDownloadOnlyKic (3.7s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p download-docker-720600 --alsologtostderr --driver=docker
aaa_download_only_test.go:232: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p download-docker-720600 --alsologtostderr --driver=docker: (1.4611584s)
helpers_test.go:175: Cleaning up "download-docker-720600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-docker-720600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-docker-720600: (1.345189s)
--- PASS: TestDownloadOnlyKic (3.70s)

TestBinaryMirror (3.44s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-885500 --alsologtostderr --binary-mirror http://127.0.0.1:51936 --driver=docker
aaa_download_only_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-885500 --alsologtostderr --binary-mirror http://127.0.0.1:51936 --driver=docker: (1.8746394s)
helpers_test.go:175: Cleaning up "binary-mirror-885500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-885500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p binary-mirror-885500: (1.3179564s)
--- PASS: TestBinaryMirror (3.44s)

TestOffline (171.19s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-384200 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-384200 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker: (2m42.9716805s)
helpers_test.go:175: Cleaning up "offline-docker-384200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-384200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-384200: (8.2179063s)
--- PASS: TestOffline (171.19s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.26s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-557700
addons_test.go:928: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-557700: exit status 85 (259.3545ms)

-- stdout --
	* Profile "addons-557700" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-557700"

-- /stdout --
** stderr ** 
	W0513 22:22:24.549387    2608 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.26s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.26s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-557700
addons_test.go:939: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-557700: exit status 85 (256.3229ms)

-- stdout --
	* Profile "addons-557700" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-557700"

-- /stdout --
** stderr ** 
	W0513 22:22:24.549387   10364 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.26s)

TestAddons/Setup (485.14s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-557700 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-557700 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: (8m5.135695s)
--- PASS: TestAddons/Setup (485.14s)

TestAddons/parallel/InspektorGadget (14.36s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-lnxr6" [71c4a54a-3a49-4fb3-b00b-c1aaede5b474] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.0192467s
addons_test.go:841: (dbg) Run:  out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-557700
addons_test.go:841: (dbg) Done: out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-557700: (8.333083s)
--- PASS: TestAddons/parallel/InspektorGadget (14.36s)

TestAddons/parallel/MetricsServer (7.22s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 80.8163ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-cgftf" [3fefb855-c46b-4569-bce1-76858af78407] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.0192272s
addons_test.go:415: (dbg) Run:  kubectl --context addons-557700 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-557700 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:432: (dbg) Done: out/minikube-windows-amd64.exe -p addons-557700 addons disable metrics-server --alsologtostderr -v=1: (1.9472723s)
--- PASS: TestAddons/parallel/MetricsServer (7.22s)

TestAddons/parallel/HelmTiller (24.92s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 54.3458ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-fmvws" [4652bd75-6beb-4010-af68-a7ebb186635a] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.0241473s
addons_test.go:473: (dbg) Run:  kubectl --context addons-557700 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-557700 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (16.0103914s)
addons_test.go:490: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-557700 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:490: (dbg) Done: out/minikube-windows-amd64.exe -p addons-557700 addons disable helm-tiller --alsologtostderr -v=1: (3.8105015s)
--- PASS: TestAddons/parallel/HelmTiller (24.92s)

TestAddons/parallel/CSI (87.39s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 93.483ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-557700 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-557700 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [89c8015a-69dd-4c1f-8813-d510c776997e] Pending
helpers_test.go:344: "task-pv-pod" [89c8015a-69dd-4c1f-8813-d510c776997e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [89c8015a-69dd-4c1f-8813-d510c776997e] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 37.0190648s
addons_test.go:584: (dbg) Run:  kubectl --context addons-557700 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-557700 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-557700 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-557700 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-557700 delete pod task-pv-pod: (1.8610155s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-557700 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-557700 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-557700 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:616: (dbg) Done: kubectl --context addons-557700 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml: (1.169125s)
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [62a8ad08-cc02-4c12-8693-749f1e23cad7] Pending
helpers_test.go:344: "task-pv-pod-restore" [62a8ad08-cc02-4c12-8693-749f1e23cad7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [62a8ad08-cc02-4c12-8693-749f1e23cad7] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.0198288s
addons_test.go:626: (dbg) Run:  kubectl --context addons-557700 delete pod task-pv-pod-restore
addons_test.go:626: (dbg) Done: kubectl --context addons-557700 delete pod task-pv-pod-restore: (1.5740062s)
addons_test.go:630: (dbg) Run:  kubectl --context addons-557700 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-557700 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-557700 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-windows-amd64.exe -p addons-557700 addons disable csi-hostpath-driver --alsologtostderr -v=1: (8.3041299s)
addons_test.go:642: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-557700 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:642: (dbg) Done: out/minikube-windows-amd64.exe -p addons-557700 addons disable volumesnapshots --alsologtostderr -v=1: (2.2026678s)
--- PASS: TestAddons/parallel/CSI (87.39s)

TestAddons/parallel/Headlamp (34.34s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-557700 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-557700 --alsologtostderr -v=1: (3.3213473s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-68456f997b-48vqt" [9e1a4e04-e541-4f52-ae10-d310e5978ea2] Pending
helpers_test.go:344: "headlamp-68456f997b-48vqt" [9e1a4e04-e541-4f52-ae10-d310e5978ea2] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-68456f997b-48vqt" [9e1a4e04-e541-4f52-ae10-d310e5978ea2] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 31.0200082s
--- PASS: TestAddons/parallel/Headlamp (34.34s)

TestAddons/parallel/CloudSpanner (7.8s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-744zc" [1b8eccb8-bafe-4829-b10b-aea44d9c02be] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.0125967s
addons_test.go:860: (dbg) Run:  out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-557700
addons_test.go:860: (dbg) Done: out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-557700: (2.7813046s)
--- PASS: TestAddons/parallel/CloudSpanner (7.80s)

TestAddons/parallel/LocalPath (104.14s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-557700 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-557700 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557700 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [f1a7ba9e-b78d-4b36-8027-d6292834621f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [f1a7ba9e-b78d-4b36-8027-d6292834621f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [f1a7ba9e-b78d-4b36-8027-d6292834621f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 20.0164671s
addons_test.go:891: (dbg) Run:  kubectl --context addons-557700 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-557700 ssh "cat /opt/local-path-provisioner/pvc-d118effe-b5b7-4b85-a78a-9aaba561fe8c_default_test-pvc/file1"
addons_test.go:900: (dbg) Done: out/minikube-windows-amd64.exe -p addons-557700 ssh "cat /opt/local-path-provisioner/pvc-d118effe-b5b7-4b85-a78a-9aaba561fe8c_default_test-pvc/file1": (1.1264025s)
addons_test.go:912: (dbg) Run:  kubectl --context addons-557700 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-557700 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-557700 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-windows-amd64.exe -p addons-557700 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (45.2239542s)
--- PASS: TestAddons/parallel/LocalPath (104.14s)
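The `helpers_test.go:344` lines above print a pod's phase followed by any not-yet-True conditions, e.g. `Succeeded / Initialized:PodCompleted / Ready:PodCompleted`. A sketch of how such a status line can be assembled (hypothetical types and names, inferred from the log format rather than taken from helpers_test.go):

```go
package main

import (
	"fmt"
	"strings"
)

// condition mirrors one "Type:Reason" fragment printed after the
// pod phase, e.g. "Ready:ContainersNotReady".
type condition struct {
	Type, Reason string
}

// statusLine joins the pod phase with its conditions in the same
// " / "-separated layout the log lines above use.
func statusLine(phase string, conds []condition) string {
	parts := []string{phase}
	for _, c := range conds {
		parts = append(parts, c.Type+":"+c.Reason)
	}
	return strings.Join(parts, " / ")
}

func main() {
	fmt.Println(statusLine("Succeeded", []condition{
		{"Initialized", "PodCompleted"},
		{"Ready", "PodCompleted"},
		{"ContainersReady", "PodCompleted"},
	}))
	// prints "Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted"
}
```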

TestAddons/parallel/NvidiaDevicePlugin (8s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-fhd52" [066543af-70ad-4346-b534-68636bd1af49] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.0155259s
addons_test.go:955: (dbg) Run:  out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-557700
addons_test.go:955: (dbg) Done: out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-557700: (1.9786235s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (8.00s)

TestAddons/parallel/Yakd (6.03s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-5ddbf7d777-lxtkz" [e9683eab-958f-4b9a-8bd9-fed88a5c6258] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.0219035s
--- PASS: TestAddons/parallel/Yakd (6.03s)

TestAddons/serial/GCPAuth/Namespaces (0.33s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-557700 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-557700 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.33s)

TestAddons/StoppedEnableDisable (14.27s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-557700
addons_test.go:172: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-557700: (12.686007s)
addons_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-557700
addons_test.go:180: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-557700
addons_test.go:185: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-557700
--- PASS: TestAddons/StoppedEnableDisable (14.27s)

TestCertOptions (100.71s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-271100 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost
E0513 23:41:53.253250   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-557700\client.crt: The system cannot find the path specified.
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-271100 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost: (1m30.3027624s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-271100 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-271100 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (1.4208703s)
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-271100 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Done: out/minikube-windows-amd64.exe ssh -p cert-options-271100 -- "sudo cat /etc/kubernetes/admin.conf": (1.5196831s)
helpers_test.go:175: Cleaning up "cert-options-271100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-271100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-271100: (7.2105029s)
--- PASS: TestCertOptions (100.71s)

TestCertExpiration (320.95s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-335400 --memory=2048 --cert-expiration=3m --driver=docker
E0513 23:38:36.970155   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-950600\client.crt: The system cannot find the path specified.
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-335400 --memory=2048 --cert-expiration=3m --driver=docker: (1m28.8102002s)
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-335400 --memory=2048 --cert-expiration=8760h --driver=docker
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-335400 --memory=2048 --cert-expiration=8760h --driver=docker: (44.6080161s)
helpers_test.go:175: Cleaning up "cert-expiration-335400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-335400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-335400: (7.5285357s)
--- PASS: TestCertExpiration (320.95s)

TestDockerFlags (78.2s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-839800 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-839800 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker: (1m9.3426657s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-839800 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-839800 ssh "sudo systemctl show docker --property=Environment --no-pager": (1.2859252s)
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-839800 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-839800 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (1.3652966s)
helpers_test.go:175: Cleaning up "docker-flags-839800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-839800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-839800: (6.2034311s)
--- PASS: TestDockerFlags (78.20s)
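TestDockerFlags passes `--docker-env=FOO=BAR --docker-env=BAZ=BAT` and then reads them back with `systemctl show docker --property=Environment`, whose output is a single line of the form `Environment=FOO=BAR BAZ=BAT`. A sketch of parsing that line into a map (hypothetical helper; quoting and escape rules of systemd are deliberately ignored here):

```go
package main

import (
	"fmt"
	"strings"
)

// parseEnvironment turns a "systemctl show --property=Environment"
// line such as "Environment=FOO=BAR BAZ=BAT" into a key/value map.
func parseEnvironment(line string) map[string]string {
	env := map[string]string{}
	line = strings.TrimPrefix(line, "Environment=")
	for _, kv := range strings.Fields(line) {
		if k, v, ok := strings.Cut(kv, "="); ok {
			env[k] = v
		}
	}
	return env
}

func main() {
	env := parseEnvironment("Environment=FOO=BAR BAZ=BAT")
	fmt.Println(env["FOO"], env["BAZ"])
	// prints "BAR BAT"
}
```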

TestForceSystemdFlag (161.69s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-918600 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker
docker_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-918600 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker: (2m3.0982222s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-918600 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-918600 ssh "docker info --format {{.CgroupDriver}}": (1.4630223s)
helpers_test.go:175: Cleaning up "force-systemd-flag-918600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-918600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-918600: (37.128381s)
--- PASS: TestForceSystemdFlag (161.69s)

                                                

                                                
=== PAUSE TestForceSystemdEnv

                                                
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-228700 --memory=2048 --alsologtostderr -v=5 --driver=docker
docker_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-228700 --memory=2048 --alsologtostderr -v=5 --driver=docker: (1m28.2542562s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-228700 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-228700 ssh "docker info --format {{.CgroupDriver}}": (1.7522822s)
helpers_test.go:175: Cleaning up "force-systemd-env-228700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-228700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-228700: (6.5671059s)
--- PASS: TestForceSystemdEnv (96.57s)

TestErrorSpam/start (3.75s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-065300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-065300 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-065300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-065300 start --dry-run: (1.2299151s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-065300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-065300 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-065300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-065300 start --dry-run: (1.2521245s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-065300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-065300 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-065300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-065300 start --dry-run: (1.2599097s)
--- PASS: TestErrorSpam/start (3.75s)

TestErrorSpam/status (3.65s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-065300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-065300 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-065300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-065300 status: (1.2057574s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-065300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-065300 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-065300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-065300 status: (1.2167363s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-065300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-065300 status
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-065300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-065300 status: (1.2262784s)
--- PASS: TestErrorSpam/status (3.65s)

                                                
                                    
TestErrorSpam/pause (3.82s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-065300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-065300 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-065300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-065300 pause: (1.496915s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-065300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-065300 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-065300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-065300 pause: (1.1461504s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-065300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-065300 pause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-065300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-065300 pause: (1.1785991s)
--- PASS: TestErrorSpam/pause (3.82s)

                                                
                                    
TestErrorSpam/unpause (5.24s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-065300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-065300 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-065300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-065300 unpause: (1.4662403s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-065300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-065300 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-065300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-065300 unpause: (1.9716809s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-065300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-065300 unpause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-065300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-065300 unpause: (1.8052529s)
--- PASS: TestErrorSpam/unpause (5.24s)

                                                
                                    
TestErrorSpam/stop (20.13s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-065300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-065300 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-065300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-065300 stop: (11.1780172s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-065300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-065300 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-065300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-065300 stop: (4.5802193s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-065300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-065300 stop
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-065300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-065300 stop: (4.3716189s)
--- PASS: TestErrorSpam/stop (20.13s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0.03s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\15868\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.03s)

                                                
                                    
TestFunctional/serial/StartWithProxy (84.31s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-950600 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker
E0513 22:35:29.885073   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-557700\client.crt: The system cannot find the path specified.
E0513 22:35:29.900314   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-557700\client.crt: The system cannot find the path specified.
E0513 22:35:29.915640   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-557700\client.crt: The system cannot find the path specified.
E0513 22:35:29.948100   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-557700\client.crt: The system cannot find the path specified.
E0513 22:35:29.995758   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-557700\client.crt: The system cannot find the path specified.
E0513 22:35:30.090292   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-557700\client.crt: The system cannot find the path specified.
E0513 22:35:30.265831   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-557700\client.crt: The system cannot find the path specified.
E0513 22:35:30.597703   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-557700\client.crt: The system cannot find the path specified.
E0513 22:35:31.248140   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-557700\client.crt: The system cannot find the path specified.
E0513 22:35:32.538896   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-557700\client.crt: The system cannot find the path specified.
E0513 22:35:35.101367   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-557700\client.crt: The system cannot find the path specified.
E0513 22:35:40.226385   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-557700\client.crt: The system cannot find the path specified.
E0513 22:35:50.468873   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-557700\client.crt: The system cannot find the path specified.
E0513 22:36:10.950357   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-557700\client.crt: The system cannot find the path specified.
functional_test.go:2230: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-950600 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker: (1m24.3055119s)
--- PASS: TestFunctional/serial/StartWithProxy (84.31s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (41.71s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-950600 --alsologtostderr -v=8
E0513 22:36:51.912843   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-557700\client.crt: The system cannot find the path specified.
functional_test.go:655: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-950600 --alsologtostderr -v=8: (41.7093479s)
functional_test.go:659: soft start took 41.7112068s for "functional-950600" cluster.
--- PASS: TestFunctional/serial/SoftStart (41.71s)

                                                
                                    
TestFunctional/serial/KubeContext (0.13s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.13s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.23s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-950600 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.23s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (6.78s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-950600 cache add registry.k8s.io/pause:3.1: (2.3985606s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-950600 cache add registry.k8s.io/pause:3.3: (2.1866698s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-950600 cache add registry.k8s.io/pause:latest: (2.1947734s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (6.78s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (4.21s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-950600 C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local3097820546\001
functional_test.go:1073: (dbg) Done: docker build -t minikube-local-cache-test:functional-950600 C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local3097820546\001: (2.0689383s)
functional_test.go:1085: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 cache add minikube-local-cache-test:functional-950600
functional_test.go:1085: (dbg) Done: out/minikube-windows-amd64.exe -p functional-950600 cache add minikube-local-cache-test:functional-950600: (1.6621136s)
functional_test.go:1090: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 cache delete minikube-local-cache-test:functional-950600
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-950600
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (4.21s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.25s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.25s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (1.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 ssh sudo crictl images
functional_test.go:1120: (dbg) Done: out/minikube-windows-amd64.exe -p functional-950600 ssh sudo crictl images: (1.1712734s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (1.17s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (5.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Done: out/minikube-windows-amd64.exe -p functional-950600 ssh sudo docker rmi registry.k8s.io/pause:latest: (1.1297823s)
functional_test.go:1149: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-950600 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (1.1588866s)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	W0513 22:37:17.569755    6700 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-windows-amd64.exe -p functional-950600 cache reload: (1.7368749s)
functional_test.go:1159: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Done: out/minikube-windows-amd64.exe -p functional-950600 ssh sudo crictl inspecti registry.k8s.io/pause:latest: (1.125099s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (5.15s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.49s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.49s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.51s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 kubectl -- --context functional-950600 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.51s)

                                                
                                    
TestFunctional/serial/ExtraConfig (52.27s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-950600 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0513 22:38:13.849628   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-557700\client.crt: The system cannot find the path specified.
functional_test.go:753: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-950600 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (52.2718253s)
functional_test.go:757: restart took 52.2720061s for "functional-950600" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (52.27s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.18s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-950600 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.18s)

                                                
                                    
TestFunctional/serial/LogsCmd (2.59s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 logs
functional_test.go:1232: (dbg) Done: out/minikube-windows-amd64.exe -p functional-950600 logs: (2.590818s)
--- PASS: TestFunctional/serial/LogsCmd (2.59s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (2.77s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 logs --file C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialLogsFileCmd2928001606\001\logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-windows-amd64.exe -p functional-950600 logs --file C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialLogsFileCmd2928001606\001\logs.txt: (2.7660551s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.77s)

                                                
                                    
TestFunctional/serial/InvalidService (5.79s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-950600 apply -f testdata\invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-950600
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-950600: exit status 115 (1.58514s)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32435 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0513 22:38:30.220381   14256 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube_service_9c977cb937a5c6299cc91c983e64e702e081bf76_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-950600 delete -f testdata\invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (5.79s)

                                                
                                    
TestFunctional/parallel/DryRun (2.9s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-950600 --dry-run --memory 250MB --alsologtostderr --driver=docker
functional_test.go:970: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-950600 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (1.2291599s)

                                                
                                                
-- stdout --
	* [functional-950600] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4355 Build 19045.4355
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18872
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0513 22:39:36.812299    4132 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0513 22:39:36.909288    4132 out.go:291] Setting OutFile to fd 884 ...
	I0513 22:39:36.909288    4132 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 22:39:36.909288    4132 out.go:304] Setting ErrFile to fd 764...
	I0513 22:39:36.909288    4132 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 22:39:36.937304    4132 out.go:298] Setting JSON to false
	I0513 22:39:36.940295    4132 start.go:129] hostinfo: {"hostname":"minikube4","uptime":6215,"bootTime":1715633761,"procs":210,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4355 Build 19045.4355","kernelVersion":"10.0.19045.4355 Build 19045.4355","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W0513 22:39:36.940295    4132 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 22:39:36.944402    4132 out.go:177] * [functional-950600] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4355 Build 19045.4355
	I0513 22:39:36.949308    4132 notify.go:220] Checking for updates...
	I0513 22:39:36.949308    4132 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0513 22:39:36.952316    4132 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0513 22:39:36.954388    4132 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I0513 22:39:36.957300    4132 out.go:177]   - MINIKUBE_LOCATION=18872
	I0513 22:39:36.959297    4132 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0513 22:39:36.963296    4132 config.go:182] Loaded profile config "functional-950600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 22:39:36.964729    4132 driver.go:392] Setting default libvirt URI to qemu:///system
	I0513 22:39:37.324330    4132 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0513 22:39:37.337328    4132 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0513 22:39:37.767535    4132 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:69 OomKillDisable:true NGoroutines:87 SystemTime:2024-05-13 22:39:37.715036878 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0513 22:39:37.784512    4132 out.go:177] * Using the docker driver based on existing profile
	I0513 22:39:37.786511    4132 start.go:297] selected driver: docker
	I0513 22:39:37.787529    4132 start.go:901] validating driver "docker" against &{Name:functional-950600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-950600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mount
MSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 22:39:37.787529    4132 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0513 22:39:37.866507    4132 out.go:177] 
	W0513 22:39:37.869508    4132 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0513 22:39:37.872526    4132 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-950600 --dry-run --alsologtostderr -v=1 --driver=docker
functional_test.go:987: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-950600 --dry-run --alsologtostderr -v=1 --driver=docker: (1.6666998s)
--- PASS: TestFunctional/parallel/DryRun (2.90s)

TestFunctional/parallel/InternationalLanguage (1.1s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-950600 --dry-run --memory 250MB --alsologtostderr --driver=docker
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-950600 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (1.1035773s)

-- stdout --
	* [functional-950600] minikube v1.33.1 sur Microsoft Windows 10 Enterprise N 10.0.19045.4355 Build 19045.4355
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18872
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	W0513 22:39:39.717324    8816 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0513 22:39:39.812294    8816 out.go:291] Setting OutFile to fd 648 ...
	I0513 22:39:39.813297    8816 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 22:39:39.813297    8816 out.go:304] Setting ErrFile to fd 792...
	I0513 22:39:39.813297    8816 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 22:39:39.838310    8816 out.go:298] Setting JSON to false
	I0513 22:39:39.842310    8816 start.go:129] hostinfo: {"hostname":"minikube4","uptime":6218,"bootTime":1715633761,"procs":210,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4355 Build 19045.4355","kernelVersion":"10.0.19045.4355 Build 19045.4355","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W0513 22:39:39.842310    8816 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 22:39:39.845306    8816 out.go:177] * [functional-950600] minikube v1.33.1 sur Microsoft Windows 10 Enterprise N 10.0.19045.4355 Build 19045.4355
	I0513 22:39:39.849290    8816 notify.go:220] Checking for updates...
	I0513 22:39:39.852324    8816 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0513 22:39:39.854300    8816 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0513 22:39:39.856301    8816 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I0513 22:39:39.859307    8816 out.go:177]   - MINIKUBE_LOCATION=18872
	I0513 22:39:39.866339    8816 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0513 22:39:39.870297    8816 config.go:182] Loaded profile config "functional-950600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 22:39:39.871300    8816 driver.go:392] Setting default libvirt URI to qemu:///system
	I0513 22:39:40.213341    8816 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0513 22:39:40.227317    8816 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0513 22:39:40.594057    8816 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:69 OomKillDisable:true NGoroutines:87 SystemTime:2024-05-13 22:39:40.547892425 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion
:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:D
ocker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0513 22:39:40.598136    8816 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0513 22:39:40.600475    8816 start.go:297] selected driver: docker
	I0513 22:39:40.600546    8816 start.go:901] validating driver "docker" against &{Name:functional-950600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-950600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mount
MSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 22:39:40.600791    8816 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0513 22:39:40.649069    8816 out.go:177] 
	W0513 22:39:40.651066    8816 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0513 22:39:40.653069    8816 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (1.10s)
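The DryRun and InternationalLanguage results above both exercise the same RSRC_INSUFFICIENT_REQ_MEMORY guard: a requested allocation of 250MiB is rejected against a usable minimum of 1800MB (only the message locale differs). A minimal sketch of that kind of check — hypothetical, not minikube's actual implementation:

```go
package main

import "fmt"

// minUsableMemoryMB is the minimum enforced in the logs above.
const minUsableMemoryMB = 1800

// validateMemory mimics the RSRC_INSUFFICIENT_REQ_MEMORY check:
// any requested allocation below the usable minimum is rejected
// before the driver is ever started (dry-run exits with status 23).
func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMemoryMB {
		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	fmt.Println(validateMemory(250))  // fails, as in the test runs above
	fmt.Println(validateMemory(4000)) // passes (the profile's configured Memory:4000)
}
```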

TestFunctional/parallel/StatusCmd (4.29s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 status
functional_test.go:850: (dbg) Done: out/minikube-windows-amd64.exe -p functional-950600 status: (1.598222s)
functional_test.go:856: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Done: out/minikube-windows-amd64.exe -p functional-950600 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (1.300553s)
functional_test.go:868: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 status -o json
functional_test.go:868: (dbg) Done: out/minikube-windows-amd64.exe -p functional-950600 status -o json: (1.3891941s)
--- PASS: TestFunctional/parallel/StatusCmd (4.29s)

TestFunctional/parallel/AddonsCmd (0.72s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.72s)

TestFunctional/parallel/PersistentVolumeClaim (104.05s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [197b8324-4017-4bce-9f03-e06f5620dfe4] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.0151835s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-950600 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-950600 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-950600 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-950600 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-950600 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [93c6e995-f3d9-4e71-87dd-a805c3d984d6] Pending
helpers_test.go:344: "sp-pod" [93c6e995-f3d9-4e71-87dd-a805c3d984d6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [93c6e995-f3d9-4e71-87dd-a805c3d984d6] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 37.0220248s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-950600 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-950600 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-950600 delete -f testdata/storage-provisioner/pod.yaml: (1.8666098s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-950600 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d833a5ae-7f10-43f4-95f1-18e059860125] Pending
helpers_test.go:344: "sp-pod" [d833a5ae-7f10-43f4-95f1-18e059860125] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d833a5ae-7f10-43f4-95f1-18e059860125] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 55.0110594s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-950600 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (104.05s)
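The claim applied from testdata/storage-provisioner/pvc.yaml is referenced above only by name (`myclaim`); the test then verifies that a file written to the mount survives pod deletion and recreation. A representative reconstruction of such a manifest (the storage size is an assumption; the authoritative file lives in the minikube repo):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim        # matches `kubectl get pvc myclaim` above
spec:
  accessModes:
    - ReadWriteOnce    # single-node access suffices for this test
  resources:
    requests:
      storage: 500Mi   # assumed size; not stated in the log
```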

TestFunctional/parallel/SSHCmd (3.24s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 ssh "echo hello"
functional_test.go:1721: (dbg) Done: out/minikube-windows-amd64.exe -p functional-950600 ssh "echo hello": (1.8337562s)
functional_test.go:1738: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Done: out/minikube-windows-amd64.exe -p functional-950600 ssh "cat /etc/hostname": (1.4013819s)
--- PASS: TestFunctional/parallel/SSHCmd (3.24s)

TestFunctional/parallel/CpCmd (7.76s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-950600 cp testdata\cp-test.txt /home/docker/cp-test.txt: (1.0931142s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 ssh -n functional-950600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-950600 ssh -n functional-950600 "sudo cat /home/docker/cp-test.txt": (1.239937s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 cp functional-950600:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalparallelCpCmd2472608745\001\cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-950600 cp functional-950600:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalparallelCpCmd2472608745\001\cp-test.txt: (1.1663001s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 ssh -n functional-950600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-950600 ssh -n functional-950600 "sudo cat /home/docker/cp-test.txt": (1.4953182s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-950600 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt: (1.0203863s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 ssh -n functional-950600 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-950600 ssh -n functional-950600 "sudo cat /tmp/does/not/exist/cp-test.txt": (1.7408109s)
--- PASS: TestFunctional/parallel/CpCmd (7.76s)

TestFunctional/parallel/MySQL (73.65s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-950600 replace --force -f testdata\mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-27l6t" [38fc58d4-8a72-4af7-b8e3-24864eea8114] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-27l6t" [38fc58d4-8a72-4af7-b8e3-24864eea8114] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 59.0167017s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-950600 exec mysql-64454c8b5c-27l6t -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-950600 exec mysql-64454c8b5c-27l6t -- mysql -ppassword -e "show databases;": exit status 1 (321.7964ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-950600 exec mysql-64454c8b5c-27l6t -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-950600 exec mysql-64454c8b5c-27l6t -- mysql -ppassword -e "show databases;": exit status 1 (264.5481ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
E0513 22:40:29.902930   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-557700\client.crt: The system cannot find the path specified.
functional_test.go:1803: (dbg) Run:  kubectl --context functional-950600 exec mysql-64454c8b5c-27l6t -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-950600 exec mysql-64454c8b5c-27l6t -- mysql -ppassword -e "show databases;": exit status 1 (289.7366ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-950600 exec mysql-64454c8b5c-27l6t -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-950600 exec mysql-64454c8b5c-27l6t -- mysql -ppassword -e "show databases;": exit status 1 (323.5794ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-950600 exec mysql-64454c8b5c-27l6t -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-950600 exec mysql-64454c8b5c-27l6t -- mysql -ppassword -e "show databases;": exit status 1 (287.0446ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-950600 exec mysql-64454c8b5c-27l6t -- mysql -ppassword -e "show databases;"
E0513 22:40:57.698393   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-557700\client.crt: The system cannot find the path specified.
--- PASS: TestFunctional/parallel/MySQL (73.65s)

TestFunctional/parallel/FileSync (1.25s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/15868/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 ssh "sudo cat /etc/test/nested/copy/15868/hosts"
functional_test.go:1927: (dbg) Done: out/minikube-windows-amd64.exe -p functional-950600 ssh "sudo cat /etc/test/nested/copy/15868/hosts": (1.2471961s)
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (1.25s)

TestFunctional/parallel/CertSync (8.15s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/15868.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 ssh "sudo cat /etc/ssl/certs/15868.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-950600 ssh "sudo cat /etc/ssl/certs/15868.pem": (1.2105346s)
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/15868.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 ssh "sudo cat /usr/share/ca-certificates/15868.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-950600 ssh "sudo cat /usr/share/ca-certificates/15868.pem": (1.2289269s)
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-950600 ssh "sudo cat /etc/ssl/certs/51391683.0": (1.1883091s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/158682.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 ssh "sudo cat /etc/ssl/certs/158682.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-950600 ssh "sudo cat /etc/ssl/certs/158682.pem": (1.5486171s)
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/158682.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 ssh "sudo cat /usr/share/ca-certificates/158682.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-950600 ssh "sudo cat /usr/share/ca-certificates/158682.pem": (1.2754266s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-950600 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (1.6936006s)
--- PASS: TestFunctional/parallel/CertSync (8.15s)

TestFunctional/parallel/NodeLabels (0.19s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-950600 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.19s)

TestFunctional/parallel/NonActiveRuntimeDisabled (1.23s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-950600 ssh "sudo systemctl is-active crio": exit status 1 (1.2307219s)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	W0513 22:38:34.171453    5368 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (1.23s)

TestFunctional/parallel/License (3.53s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2284: (dbg) Done: out/minikube-windows-amd64.exe license: (3.5173619s)
--- PASS: TestFunctional/parallel/License (3.53s)

TestFunctional/parallel/Version/short (0.24s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 version --short
--- PASS: TestFunctional/parallel/Version/short (0.24s)

TestFunctional/parallel/Version/components (2.34s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-windows-amd64.exe -p functional-950600 version -o=json --components: (2.3371937s)
--- PASS: TestFunctional/parallel/Version/components (2.34s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.86s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-950600 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.0
registry.k8s.io/kube-proxy:v1.30.0
registry.k8s.io/kube-controller-manager:v1.30.0
registry.k8s.io/kube-apiserver:v1.30.0
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-950600
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-950600
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-950600 image ls --format short --alsologtostderr:
W0513 22:39:58.037871    3880 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0513 22:39:58.123593    3880 out.go:291] Setting OutFile to fd 1000 ...
I0513 22:39:58.124584    3880 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0513 22:39:58.124584    3880 out.go:304] Setting ErrFile to fd 940...
I0513 22:39:58.124584    3880 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0513 22:39:58.138439    3880 config.go:182] Loaded profile config "functional-950600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0513 22:39:58.139034    3880 config.go:182] Loaded profile config "functional-950600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0513 22:39:58.156478    3880 cli_runner.go:164] Run: docker container inspect functional-950600 --format={{.State.Status}}
I0513 22:39:58.354155    3880 ssh_runner.go:195] Run: systemctl --version
I0513 22:39:58.371688    3880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-950600
I0513 22:39:58.524670    3880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52764 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-950600\id_rsa Username:docker}
I0513 22:39:58.680541    3880 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.86s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.81s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-950600 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/etcd                        | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
| docker.io/localhost/my-image                | functional-950600 | 017c8065c9705 | 1.24MB |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| gcr.io/google-containers/addon-resizer      | functional-950600 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-950600 | 4d28179d5497a | 30B    |
| docker.io/library/nginx                     | latest            | 1d668e06f1e53 | 188MB  |
| docker.io/library/nginx                     | alpine            | 501d84f5d0648 | 48.3MB |
| registry.k8s.io/kube-scheduler              | v1.30.0           | 259c8277fcbbc | 62MB   |
| registry.k8s.io/kube-controller-manager     | v1.30.0           | c7aad43836fa5 | 111MB  |
| registry.k8s.io/kube-apiserver              | v1.30.0           | c42f13656d0b2 | 117MB  |
| registry.k8s.io/kube-proxy                  | v1.30.0           | a0bf559e280cf | 84.7MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-950600 image ls --format table --alsologtostderr:
W0513 22:40:11.490977   10264 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0513 22:40:11.572160   10264 out.go:291] Setting OutFile to fd 792 ...
I0513 22:40:11.572873   10264 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0513 22:40:11.572873   10264 out.go:304] Setting ErrFile to fd 844...
I0513 22:40:11.572955   10264 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0513 22:40:11.587403   10264 config.go:182] Loaded profile config "functional-950600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0513 22:40:11.588260   10264 config.go:182] Loaded profile config "functional-950600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0513 22:40:11.609166   10264 cli_runner.go:164] Run: docker container inspect functional-950600 --format={{.State.Status}}
I0513 22:40:11.796298   10264 ssh_runner.go:195] Run: systemctl --version
I0513 22:40:11.804196   10264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-950600
I0513 22:40:11.984195   10264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52764 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-950600\id_rsa Username:docker}
I0513 22:40:12.113645   10264 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.81s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.77s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-950600 image ls --format json --alsologtostderr:
[{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"017c8065c97053db6fad55fc2bb5effb73b41435a2d741662e719f4e31f5b317","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-950600"],"size":"1240000"},{"id":"c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.0"],"size":"117000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"149000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-950600"],"size":"32900000"},{"id":"501d84f5d06487ff81e506134dc922ed4fd2080d5521eb5b6ee4054fa17d15c4","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"48300000"},{"id":"1d668e06f1e534ab338404ba891c37d618dd53c9073dcdd4ebde82aa7643f83f","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.0"],"size":"62000000"},{"id":"4d28179d5497ada5e5f1df310f467bed9a3d1552ca9148dbd9f59f14967ebb15","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-950600"],"size":"30"},{"id":"c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.0"],"size":"111000000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.0"],"size":"84700000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-950600 image ls --format json --alsologtostderr:
W0513 22:40:10.723924   11788 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0513 22:40:10.799925   11788 out.go:291] Setting OutFile to fd 708 ...
I0513 22:40:10.800917   11788 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0513 22:40:10.800917   11788 out.go:304] Setting ErrFile to fd 692...
I0513 22:40:10.800917   11788 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0513 22:40:10.813928   11788 config.go:182] Loaded profile config "functional-950600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0513 22:40:10.813928   11788 config.go:182] Loaded profile config "functional-950600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0513 22:40:10.834921   11788 cli_runner.go:164] Run: docker container inspect functional-950600 --format={{.State.Status}}
I0513 22:40:10.998279   11788 ssh_runner.go:195] Run: systemctl --version
I0513 22:40:11.006217   11788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-950600
I0513 22:40:11.173454   11788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52764 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-950600\id_rsa Username:docker}
I0513 22:40:11.304917   11788 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.77s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.82s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-950600 image ls --format yaml --alsologtostderr:
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 1d668e06f1e534ab338404ba891c37d618dd53c9073dcdd4ebde82aa7643f83f
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 501d84f5d06487ff81e506134dc922ed4fd2080d5521eb5b6ee4054fa17d15c4
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "48300000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.0
size: "117000000"
- id: 259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.0
size: "62000000"
- id: c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.0
size: "111000000"
- id: a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.0
size: "84700000"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "149000000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-950600
size: "32900000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 4d28179d5497ada5e5f1df310f467bed9a3d1552ca9148dbd9f59f14967ebb15
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-950600
size: "30"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-950600 image ls --format yaml --alsologtostderr:
W0513 22:39:58.892336    1652 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0513 22:39:58.972330    1652 out.go:291] Setting OutFile to fd 760 ...
I0513 22:39:58.972330    1652 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0513 22:39:58.972330    1652 out.go:304] Setting ErrFile to fd 1004...
I0513 22:39:58.972330    1652 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0513 22:39:58.988348    1652 config.go:182] Loaded profile config "functional-950600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0513 22:39:58.988348    1652 config.go:182] Loaded profile config "functional-950600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0513 22:39:59.007394    1652 cli_runner.go:164] Run: docker container inspect functional-950600 --format={{.State.Status}}
I0513 22:39:59.173986    1652 ssh_runner.go:195] Run: systemctl --version
I0513 22:39:59.182989    1652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-950600
I0513 22:39:59.369890    1652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52764 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-950600\id_rsa Username:docker}
I0513 22:39:59.512968    1652 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.82s)

TestFunctional/parallel/ImageCommands/ImageBuild (11.01s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-950600 ssh pgrep buildkitd: exit status 1 (1.1419386s)

** stderr ** 
	W0513 22:39:59.713213   15000 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 image build -t localhost/my-image:functional-950600 testdata\build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe -p functional-950600 image build -t localhost/my-image:functional-950600 testdata\build --alsologtostderr: (9.0888522s)
functional_test.go:322: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-950600 image build -t localhost/my-image:functional-950600 testdata\build --alsologtostderr:
W0513 22:40:00.856212   10524 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0513 22:40:00.941462   10524 out.go:291] Setting OutFile to fd 760 ...
I0513 22:40:00.956174   10524 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0513 22:40:00.956174   10524 out.go:304] Setting ErrFile to fd 1004...
I0513 22:40:00.956174   10524 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0513 22:40:00.980624   10524 config.go:182] Loaded profile config "functional-950600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0513 22:40:00.999623   10524 config.go:182] Loaded profile config "functional-950600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0513 22:40:01.016629   10524 cli_runner.go:164] Run: docker container inspect functional-950600 --format={{.State.Status}}
I0513 22:40:01.205102   10524 ssh_runner.go:195] Run: systemctl --version
I0513 22:40:01.214090   10524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-950600
I0513 22:40:01.373931   10524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52764 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-950600\id_rsa Username:docker}
I0513 22:40:01.499931   10524 build_images.go:161] Building image from path: C:\Users\jenkins.minikube4\AppData\Local\Temp\build.4201498803.tar
I0513 22:40:01.511626   10524 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0513 22:40:01.541625   10524 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4201498803.tar
I0513 22:40:01.566647   10524 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4201498803.tar: stat -c "%s %y" /var/lib/minikube/build/build.4201498803.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.4201498803.tar': No such file or directory
I0513 22:40:01.567249   10524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\AppData\Local\Temp\build.4201498803.tar --> /var/lib/minikube/build/build.4201498803.tar (3072 bytes)
I0513 22:40:01.616713   10524 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4201498803
I0513 22:40:01.644720   10524 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4201498803 -xf /var/lib/minikube/build/build.4201498803.tar
I0513 22:40:01.658714   10524 docker.go:360] Building image: /var/lib/minikube/build/build.4201498803
I0513 22:40:01.667732   10524 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-950600 /var/lib/minikube/build/build.4201498803
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile:
#1 transferring dockerfile: 97B 0.0s done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.6s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.1s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.8s

#6 [2/3] RUN true
#6 DONE 5.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.2s

#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:017c8065c97053db6fad55fc2bb5effb73b41435a2d741662e719f4e31f5b317
#8 writing image sha256:017c8065c97053db6fad55fc2bb5effb73b41435a2d741662e719f4e31f5b317 done
#8 naming to localhost/my-image:functional-950600 0.0s done
#8 DONE 0.2s
I0513 22:40:09.735589   10524 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-950600 /var/lib/minikube/build/build.4201498803: (8.0673781s)
I0513 22:40:09.752581   10524 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4201498803
I0513 22:40:09.785923   10524 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4201498803.tar
I0513 22:40:09.812089   10524 build_images.go:217] Built localhost/my-image:functional-950600 from C:\Users\jenkins.minikube4\AppData\Local\Temp\build.4201498803.tar
I0513 22:40:09.812089   10524 build_images.go:133] succeeded building to: functional-950600
I0513 22:40:09.812089   10524 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (11.01s)
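[Editor's note: the 97-byte Dockerfile exercised above is not shown in the log, but build stages [1/3]–[3/3] imply it looks roughly like this — a reconstruction, not the actual testdata file:]

```dockerfile
# Reconstructed from the BuildKit stages above (hypothetical; the real
# testdata Dockerfile may differ in exact ordering or comments).
FROM gcr.io/k8s-minikube/busybox:latest
RUN true
ADD content.txt /
```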

TestFunctional/parallel/ImageCommands/Setup (4.65s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (4.3258148s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-950600
--- PASS: TestFunctional/parallel/ImageCommands/Setup (4.65s)

TestFunctional/parallel/ServiceCmd/DeployApp (22.65s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-950600 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-950600 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-5nsb8" [8463d5d3-f463-4306-8c17-11af51d517e0] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-5nsb8" [8463d5d3-f463-4306-8c17-11af51d517e0] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 22.0287478s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (22.65s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (13.77s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 image load --daemon gcr.io/google-containers/addon-resizer:functional-950600 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-windows-amd64.exe -p functional-950600 image load --daemon gcr.io/google-containers/addon-resizer:functional-950600 --alsologtostderr: (12.6874423s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-950600 image ls: (1.0775744s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (13.77s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (1.97s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-950600 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-950600 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-950600 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2000: OpenProcess: The parameter is incorrect.
helpers_test.go:508: unable to kill pid 2196: OpenProcess: The parameter is incorrect.
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-950600 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (1.97s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-950600 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (23.9s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-950600 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [b126005d-57d0-44f9-a7f3-7a93512f312a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [b126005d-57d0-44f9-a7f3-7a93512f312a] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 23.032656s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (23.90s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (6.92s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 image load --daemon gcr.io/google-containers/addon-resizer:functional-950600 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-windows-amd64.exe -p functional-950600 image load --daemon gcr.io/google-containers/addon-resizer:functional-950600 --alsologtostderr: (5.9700025s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (6.92s)

TestFunctional/parallel/ServiceCmd/List (1.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 service list
functional_test.go:1455: (dbg) Done: out/minikube-windows-amd64.exe -p functional-950600 service list: (1.4872046s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.49s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.67s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 service list -o json
functional_test.go:1485: (dbg) Done: out/minikube-windows-amd64.exe -p functional-950600 service list -o json: (1.6709148s)
functional_test.go:1490: Took "1.6709148s" to run "out/minikube-windows-amd64.exe -p functional-950600 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.67s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (17.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (4.4338771s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-950600
functional_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 image load --daemon gcr.io/google-containers/addon-resizer:functional-950600 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-windows-amd64.exe -p functional-950600 image load --daemon gcr.io/google-containers/addon-resizer:functional-950600 --alsologtostderr: (11.6731729s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (17.39s)

TestFunctional/parallel/ServiceCmd/HTTPS (15.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-950600 service --namespace=default --https --url hello-node: exit status 1 (15.0276698s)

-- stdout --
	https://127.0.0.1:53050

-- /stdout --
** stderr ** 
	W0513 22:39:02.135622    2560 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1518: found endpoint: https://127.0.0.1:53050
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.03s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.21s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-950600 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.21s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.23s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-950600 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 3600: TerminateProcess: Access is denied.
helpers_test.go:508: unable to kill pid 9120: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.23s)

TestFunctional/parallel/ServiceCmd/Format (15.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-950600 service hello-node --url --format={{.IP}}: exit status 1 (15.028211s)

-- stdout --
	127.0.0.1

-- /stdout --
** stderr ** 
	W0513 22:39:17.124796    5680 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.03s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (5.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 image save gcr.io/google-containers/addon-resizer:functional-950600 C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-windows-amd64.exe -p functional-950600 image save gcr.io/google-containers/addon-resizer:functional-950600 C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar --alsologtostderr: (5.5580315s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (5.56s)

TestFunctional/parallel/ProfileCmd/profile_not_create (1.96s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1271: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.5389358s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (1.96s)

TestFunctional/parallel/ImageCommands/ImageRemove (1.63s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 image rm gcr.io/google-containers/addon-resizer:functional-950600 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.63s)

TestFunctional/parallel/ProfileCmd/profile_list (1.57s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1306: (dbg) Done: out/minikube-windows-amd64.exe profile list: (1.328029s)
functional_test.go:1311: Took "1.3280826s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1325: Took "236.8847ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (1.57s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (5.65s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 image load C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-windows-amd64.exe -p functional-950600 image load C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar --alsologtostderr: (4.8480849s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (5.65s)

TestFunctional/parallel/ProfileCmd/profile_json_output (1.68s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1357: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (1.4589889s)
functional_test.go:1362: Took "1.4595204s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1375: Took "223.8139ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (1.68s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (5.81s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-950600
functional_test.go:423: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 image save --daemon gcr.io/google-containers/addon-resizer:functional-950600 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-windows-amd64.exe -p functional-950600 image save --daemon gcr.io/google-containers/addon-resizer:functional-950600 --alsologtostderr: (5.3716967s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-950600
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (5.81s)

TestFunctional/parallel/ServiceCmd/URL (15.01s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-950600 service hello-node --url: exit status 1 (15.0097639s)

-- stdout --
	http://127.0.0.1:53122

-- /stdout --
** stderr ** 
	W0513 22:39:32.194399    5040 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1561: found endpoint for hello-node: http://127.0.0.1:53122
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.01s)

TestFunctional/parallel/DockerEnv/powershell (8.3s)

=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:495: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-950600 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-950600"
functional_test.go:495: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-950600 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-950600": (4.7809747s)
functional_test.go:518: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-950600 docker-env | Invoke-Expression ; docker images"
functional_test.go:518: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-950600 docker-env | Invoke-Expression ; docker images": (3.5059881s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (8.30s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.62s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.62s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.61s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.61s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.59s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-950600 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.59s)

TestFunctional/delete_addon-resizer_images (0.44s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-950600
--- PASS: TestFunctional/delete_addon-resizer_images (0.44s)

TestFunctional/delete_my-image_image (0.18s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-950600
--- PASS: TestFunctional/delete_my-image_image (0.18s)

TestFunctional/delete_minikube_cached_images (0.17s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-950600
--- PASS: TestFunctional/delete_minikube_cached_images (0.17s)

TestMultiControlPlane/serial/StartCluster (229.32s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-770100 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker
E0513 22:45:29.910072   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-557700\client.crt: The system cannot find the path specified.
ha_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-770100 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker: (3m45.9560724s)
ha_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 status -v=7 --alsologtostderr
E0513 22:48:36.834022   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-950600\client.crt: The system cannot find the path specified.
E0513 22:48:36.849425   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-950600\client.crt: The system cannot find the path specified.
E0513 22:48:36.865357   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-950600\client.crt: The system cannot find the path specified.
E0513 22:48:36.897097   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-950600\client.crt: The system cannot find the path specified.
E0513 22:48:36.944242   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-950600\client.crt: The system cannot find the path specified.
E0513 22:48:37.037905   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-950600\client.crt: The system cannot find the path specified.
E0513 22:48:37.210133   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-950600\client.crt: The system cannot find the path specified.
E0513 22:48:37.534851   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-950600\client.crt: The system cannot find the path specified.
ha_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 status -v=7 --alsologtostderr: (3.363725s)
--- PASS: TestMultiControlPlane/serial/StartCluster (229.32s)

TestMultiControlPlane/serial/DeployApp (13.25s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-770100 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
E0513 22:48:38.176664   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-950600\client.crt: The system cannot find the path specified.
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-770100 -- rollout status deployment/busybox
E0513 22:48:39.457373   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-950600\client.crt: The system cannot find the path specified.
E0513 22:48:42.021668   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-950600\client.crt: The system cannot find the path specified.
ha_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-770100 -- rollout status deployment/busybox: (3.6837623s)
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-770100 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-770100 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-770100 -- exec busybox-fc5497c4f-7jzjh -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-770100 -- exec busybox-fc5497c4f-7jzjh -- nslookup kubernetes.io: (1.8978284s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-770100 -- exec busybox-fc5497c4f-tmlg4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-770100 -- exec busybox-fc5497c4f-tmlg4 -- nslookup kubernetes.io: (1.5374133s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-770100 -- exec busybox-fc5497c4f-xmchw -- nslookup kubernetes.io
E0513 22:48:47.153614   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-950600\client.crt: The system cannot find the path specified.
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-770100 -- exec busybox-fc5497c4f-xmchw -- nslookup kubernetes.io: (1.5215991s)
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-770100 -- exec busybox-fc5497c4f-7jzjh -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-770100 -- exec busybox-fc5497c4f-tmlg4 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-770100 -- exec busybox-fc5497c4f-xmchw -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-770100 -- exec busybox-fc5497c4f-7jzjh -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-770100 -- exec busybox-fc5497c4f-tmlg4 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-770100 -- exec busybox-fc5497c4f-xmchw -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (13.25s)

TestMultiControlPlane/serial/PingHostFromPods (3.49s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-770100 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-770100 -- exec busybox-fc5497c4f-7jzjh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-770100 -- exec busybox-fc5497c4f-7jzjh -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-770100 -- exec busybox-fc5497c4f-tmlg4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-770100 -- exec busybox-fc5497c4f-tmlg4 -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-770100 -- exec busybox-fc5497c4f-xmchw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-770100 -- exec busybox-fc5497c4f-xmchw -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (3.49s)
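The PingHostFromPods runs above recover the host IP by piping busybox `nslookup host.minikube.internal` through `awk 'NR==5' | cut -d' ' -f3`. A minimal sketch of that extraction against canned output (the sample nslookup text below is an assumption modeled on busybox's output format, not captured from this run):

```shell
#!/bin/sh
# Canned busybox-style nslookup output (assumed layout); line 5 carries
# the resolved host address.
lookup='Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.65.254 host.minikube.internal'

# NR==5 keeps only the fifth line; field 3 (space-delimited) is the IP.
ip=$(printf '%s\n' "$lookup" | awk 'NR==5' | cut -d' ' -f3)
echo "$ip"   # prints 192.168.65.254
```

The test then pings that extracted address from each pod (`ping -c 1 192.168.65.254` in the runs above).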

TestMultiControlPlane/serial/AddWorkerNode (57.38s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe node add -p ha-770100 -v=7 --alsologtostderr
E0513 22:48:57.406250   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-950600\client.crt: The system cannot find the path specified.
E0513 22:49:17.901131   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-950600\client.crt: The system cannot find the path specified.
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe node add -p ha-770100 -v=7 --alsologtostderr: (53.069952s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 status -v=7 --alsologtostderr: (4.3084624s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (57.38s)

TestMultiControlPlane/serial/NodeLabels (0.19s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-770100 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.19s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (3.4s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (3.4016793s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (3.40s)

TestMultiControlPlane/serial/CopyFile (69.04s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 status --output json -v=7 --alsologtostderr
E0513 22:49:58.878056   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-950600\client.crt: The system cannot find the path specified.
ha_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 status --output json -v=7 --alsologtostderr: (4.0538327s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 cp testdata\cp-test.txt ha-770100:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 cp testdata\cp-test.txt ha-770100:/home/docker/cp-test.txt: (1.1445029s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100 "sudo cat /home/docker/cp-test.txt": (1.0925014s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 cp ha-770100:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2683290428\001\cp-test_ha-770100.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 cp ha-770100:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2683290428\001\cp-test_ha-770100.txt: (1.1305359s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100 "sudo cat /home/docker/cp-test.txt": (1.1324008s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 cp ha-770100:/home/docker/cp-test.txt ha-770100-m02:/home/docker/cp-test_ha-770100_ha-770100-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 cp ha-770100:/home/docker/cp-test.txt ha-770100-m02:/home/docker/cp-test_ha-770100_ha-770100-m02.txt: (1.6578175s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100 "sudo cat /home/docker/cp-test.txt": (1.0832656s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100-m02 "sudo cat /home/docker/cp-test_ha-770100_ha-770100-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100-m02 "sudo cat /home/docker/cp-test_ha-770100_ha-770100-m02.txt": (1.1110284s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 cp ha-770100:/home/docker/cp-test.txt ha-770100-m03:/home/docker/cp-test_ha-770100_ha-770100-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 cp ha-770100:/home/docker/cp-test.txt ha-770100-m03:/home/docker/cp-test_ha-770100_ha-770100-m03.txt: (1.6854178s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100 "sudo cat /home/docker/cp-test.txt": (1.0960706s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100-m03 "sudo cat /home/docker/cp-test_ha-770100_ha-770100-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100-m03 "sudo cat /home/docker/cp-test_ha-770100_ha-770100-m03.txt": (1.1474388s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 cp ha-770100:/home/docker/cp-test.txt ha-770100-m04:/home/docker/cp-test_ha-770100_ha-770100-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 cp ha-770100:/home/docker/cp-test.txt ha-770100-m04:/home/docker/cp-test_ha-770100_ha-770100-m04.txt: (1.6586776s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100 "sudo cat /home/docker/cp-test.txt": (1.135336s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100-m04 "sudo cat /home/docker/cp-test_ha-770100_ha-770100-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100-m04 "sudo cat /home/docker/cp-test_ha-770100_ha-770100-m04.txt": (1.0751083s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 cp testdata\cp-test.txt ha-770100-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 cp testdata\cp-test.txt ha-770100-m02:/home/docker/cp-test.txt: (1.1344539s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100-m02 "sudo cat /home/docker/cp-test.txt": (1.1092551s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 cp ha-770100-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2683290428\001\cp-test_ha-770100-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 cp ha-770100-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2683290428\001\cp-test_ha-770100-m02.txt: (1.118437s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100-m02 "sudo cat /home/docker/cp-test.txt": (1.1101182s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 cp ha-770100-m02:/home/docker/cp-test.txt ha-770100:/home/docker/cp-test_ha-770100-m02_ha-770100.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 cp ha-770100-m02:/home/docker/cp-test.txt ha-770100:/home/docker/cp-test_ha-770100-m02_ha-770100.txt: (1.6307174s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100-m02 "sudo cat /home/docker/cp-test.txt": (1.1433254s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100 "sudo cat /home/docker/cp-test_ha-770100-m02_ha-770100.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100 "sudo cat /home/docker/cp-test_ha-770100-m02_ha-770100.txt": (1.1113408s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 cp ha-770100-m02:/home/docker/cp-test.txt ha-770100-m03:/home/docker/cp-test_ha-770100-m02_ha-770100-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 cp ha-770100-m02:/home/docker/cp-test.txt ha-770100-m03:/home/docker/cp-test_ha-770100-m02_ha-770100-m03.txt: (1.6609087s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100-m02 "sudo cat /home/docker/cp-test.txt": (1.0910016s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100-m03 "sudo cat /home/docker/cp-test_ha-770100-m02_ha-770100-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100-m03 "sudo cat /home/docker/cp-test_ha-770100-m02_ha-770100-m03.txt": (1.1627035s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 cp ha-770100-m02:/home/docker/cp-test.txt ha-770100-m04:/home/docker/cp-test_ha-770100-m02_ha-770100-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 cp ha-770100-m02:/home/docker/cp-test.txt ha-770100-m04:/home/docker/cp-test_ha-770100-m02_ha-770100-m04.txt: (1.6985374s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100-m02 "sudo cat /home/docker/cp-test.txt"
E0513 22:50:29.931967   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-557700\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100-m02 "sudo cat /home/docker/cp-test.txt": (1.1427751s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100-m04 "sudo cat /home/docker/cp-test_ha-770100-m02_ha-770100-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100-m04 "sudo cat /home/docker/cp-test_ha-770100-m02_ha-770100-m04.txt": (1.1417379s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 cp testdata\cp-test.txt ha-770100-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 cp testdata\cp-test.txt ha-770100-m03:/home/docker/cp-test.txt: (1.1504192s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100-m03 "sudo cat /home/docker/cp-test.txt": (1.1068639s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 cp ha-770100-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2683290428\001\cp-test_ha-770100-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 cp ha-770100-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2683290428\001\cp-test_ha-770100-m03.txt: (1.144975s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100-m03 "sudo cat /home/docker/cp-test.txt": (1.1298783s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 cp ha-770100-m03:/home/docker/cp-test.txt ha-770100:/home/docker/cp-test_ha-770100-m03_ha-770100.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 cp ha-770100-m03:/home/docker/cp-test.txt ha-770100:/home/docker/cp-test_ha-770100-m03_ha-770100.txt: (1.6721921s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100-m03 "sudo cat /home/docker/cp-test.txt": (1.1374738s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100 "sudo cat /home/docker/cp-test_ha-770100-m03_ha-770100.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100 "sudo cat /home/docker/cp-test_ha-770100-m03_ha-770100.txt": (1.1313815s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 cp ha-770100-m03:/home/docker/cp-test.txt ha-770100-m02:/home/docker/cp-test_ha-770100-m03_ha-770100-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 cp ha-770100-m03:/home/docker/cp-test.txt ha-770100-m02:/home/docker/cp-test_ha-770100-m03_ha-770100-m02.txt: (1.6250145s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100-m03 "sudo cat /home/docker/cp-test.txt": (1.1352321s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100-m02 "sudo cat /home/docker/cp-test_ha-770100-m03_ha-770100-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100-m02 "sudo cat /home/docker/cp-test_ha-770100-m03_ha-770100-m02.txt": (1.134823s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 cp ha-770100-m03:/home/docker/cp-test.txt ha-770100-m04:/home/docker/cp-test_ha-770100-m03_ha-770100-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 cp ha-770100-m03:/home/docker/cp-test.txt ha-770100-m04:/home/docker/cp-test_ha-770100-m03_ha-770100-m04.txt: (1.6100198s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100-m03 "sudo cat /home/docker/cp-test.txt": (1.1062812s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100-m04 "sudo cat /home/docker/cp-test_ha-770100-m03_ha-770100-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100-m04 "sudo cat /home/docker/cp-test_ha-770100-m03_ha-770100-m04.txt": (1.109404s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 cp testdata\cp-test.txt ha-770100-m04:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 cp testdata\cp-test.txt ha-770100-m04:/home/docker/cp-test.txt: (1.1392603s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100-m04 "sudo cat /home/docker/cp-test.txt": (1.1318868s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 cp ha-770100-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2683290428\001\cp-test_ha-770100-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 cp ha-770100-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2683290428\001\cp-test_ha-770100-m04.txt: (1.12302s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100-m04 "sudo cat /home/docker/cp-test.txt": (1.1149158s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 cp ha-770100-m04:/home/docker/cp-test.txt ha-770100:/home/docker/cp-test_ha-770100-m04_ha-770100.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 cp ha-770100-m04:/home/docker/cp-test.txt ha-770100:/home/docker/cp-test_ha-770100-m04_ha-770100.txt: (1.6908252s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100-m04 "sudo cat /home/docker/cp-test.txt": (1.1366004s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100 "sudo cat /home/docker/cp-test_ha-770100-m04_ha-770100.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100 "sudo cat /home/docker/cp-test_ha-770100-m04_ha-770100.txt": (1.1295984s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 cp ha-770100-m04:/home/docker/cp-test.txt ha-770100-m02:/home/docker/cp-test_ha-770100-m04_ha-770100-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 cp ha-770100-m04:/home/docker/cp-test.txt ha-770100-m02:/home/docker/cp-test_ha-770100-m04_ha-770100-m02.txt: (1.6853348s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100-m04 "sudo cat /home/docker/cp-test.txt": (1.1287047s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100-m02 "sudo cat /home/docker/cp-test_ha-770100-m04_ha-770100-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100-m02 "sudo cat /home/docker/cp-test_ha-770100-m04_ha-770100-m02.txt": (1.142237s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 cp ha-770100-m04:/home/docker/cp-test.txt ha-770100-m03:/home/docker/cp-test_ha-770100-m04_ha-770100-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 cp ha-770100-m04:/home/docker/cp-test.txt ha-770100-m03:/home/docker/cp-test_ha-770100-m04_ha-770100-m03.txt: (1.6513632s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100-m04 "sudo cat /home/docker/cp-test.txt": (1.1491318s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100-m03 "sudo cat /home/docker/cp-test_ha-770100-m04_ha-770100-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 ssh -n ha-770100-m03 "sudo cat /home/docker/cp-test_ha-770100-m04_ha-770100-m03.txt": (1.1382724s)
--- PASS: TestMultiControlPlane/serial/CopyFile (69.04s)

TestMultiControlPlane/serial/StopSecondaryNode (15.43s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 node stop m02 -v=7 --alsologtostderr: (12.1803133s)
ha_test.go:369: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-770100 status -v=7 --alsologtostderr: exit status 7 (3.245874s)
-- stdout --
	ha-770100
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-770100-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-770100-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-770100-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	W0513 22:51:16.820009   10460 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0513 22:51:16.904154   10460 out.go:291] Setting OutFile to fd 692 ...
	I0513 22:51:16.905195   10460 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 22:51:16.905195   10460 out.go:304] Setting ErrFile to fd 1016...
	I0513 22:51:16.905195   10460 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 22:51:16.921185   10460 out.go:298] Setting JSON to false
	I0513 22:51:16.922221   10460 mustload.go:65] Loading cluster: ha-770100
	I0513 22:51:16.922221   10460 notify.go:220] Checking for updates...
	I0513 22:51:16.922523   10460 config.go:182] Loaded profile config "ha-770100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 22:51:16.922523   10460 status.go:255] checking status of ha-770100 ...
	I0513 22:51:16.942687   10460 cli_runner.go:164] Run: docker container inspect ha-770100 --format={{.State.Status}}
	I0513 22:51:17.114008   10460 status.go:330] ha-770100 host status = "Running" (err=<nil>)
	I0513 22:51:17.114008   10460 host.go:66] Checking if "ha-770100" exists ...
	I0513 22:51:17.126849   10460 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-770100
	I0513 22:51:17.301724   10460 host.go:66] Checking if "ha-770100" exists ...
	I0513 22:51:17.313961   10460 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0513 22:51:17.321958   10460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770100
	I0513 22:51:17.485298   10460 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53206 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\ha-770100\id_rsa Username:docker}
	I0513 22:51:17.700189   10460 ssh_runner.go:195] Run: systemctl --version
	I0513 22:51:17.729867   10460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0513 22:51:17.762320   10460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-770100
	I0513 22:51:17.927398   10460 kubeconfig.go:125] found "ha-770100" server: "https://127.0.0.1:53205"
	I0513 22:51:17.927524   10460 api_server.go:166] Checking apiserver status ...
	I0513 22:51:17.939515   10460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0513 22:51:17.973292   10460 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2477/cgroup
	I0513 22:51:17.993688   10460 api_server.go:182] apiserver freezer: "7:freezer:/docker/daeb045af376ea925bf2db38e4cc60f74b7b93bbfd47844bc8d0f03d3efe5a47/kubepods/burstable/podde2aefba330c604dc9db6e1001719826/fce01c79a1fe8b92c25665d7f0c320dc6ac579fc7c17f3784b568e20509465b1"
	I0513 22:51:18.006936   10460 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/daeb045af376ea925bf2db38e4cc60f74b7b93bbfd47844bc8d0f03d3efe5a47/kubepods/burstable/podde2aefba330c604dc9db6e1001719826/fce01c79a1fe8b92c25665d7f0c320dc6ac579fc7c17f3784b568e20509465b1/freezer.state
	I0513 22:51:18.025926   10460 api_server.go:204] freezer state: "THAWED"
	I0513 22:51:18.025926   10460 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:53205/healthz ...
	I0513 22:51:18.036928   10460 api_server.go:279] https://127.0.0.1:53205/healthz returned 200:
	ok
	I0513 22:51:18.036928   10460 status.go:422] ha-770100 apiserver status = Running (err=<nil>)
	I0513 22:51:18.036928   10460 status.go:257] ha-770100 status: &{Name:ha-770100 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0513 22:51:18.036928   10460 status.go:255] checking status of ha-770100-m02 ...
	I0513 22:51:18.053961   10460 cli_runner.go:164] Run: docker container inspect ha-770100-m02 --format={{.State.Status}}
	I0513 22:51:18.220996   10460 status.go:330] ha-770100-m02 host status = "Stopped" (err=<nil>)
	I0513 22:51:18.221281   10460 status.go:343] host is not running, skipping remaining checks
	I0513 22:51:18.221345   10460 status.go:257] ha-770100-m02 status: &{Name:ha-770100-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0513 22:51:18.221466   10460 status.go:255] checking status of ha-770100-m03 ...
	I0513 22:51:18.241206   10460 cli_runner.go:164] Run: docker container inspect ha-770100-m03 --format={{.State.Status}}
	I0513 22:51:18.407880   10460 status.go:330] ha-770100-m03 host status = "Running" (err=<nil>)
	I0513 22:51:18.408425   10460 host.go:66] Checking if "ha-770100-m03" exists ...
	I0513 22:51:18.421127   10460 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-770100-m03
	I0513 22:51:18.577742   10460 host.go:66] Checking if "ha-770100-m03" exists ...
	I0513 22:51:18.592178   10460 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0513 22:51:18.604039   10460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770100-m03
	I0513 22:51:18.763620   10460 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53320 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\ha-770100-m03\id_rsa Username:docker}
	I0513 22:51:18.896594   10460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0513 22:51:18.925417   10460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-770100
	I0513 22:51:19.094835   10460 kubeconfig.go:125] found "ha-770100" server: "https://127.0.0.1:53205"
	I0513 22:51:19.094924   10460 api_server.go:166] Checking apiserver status ...
	I0513 22:51:19.110139   10460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0513 22:51:19.144615   10460 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2444/cgroup
	I0513 22:51:19.164030   10460 api_server.go:182] apiserver freezer: "7:freezer:/docker/9fc15a43624207782725120403536553b86395e29feab83141575148d41bccc4/kubepods/burstable/podb1f180dde8423561e27fbdccea547d51/4c831146d2b39eaeead4527ed7f67e58ad0e4baf40303d0e1ae2c9bfca7a70e0"
	I0513 22:51:19.174973   10460 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/9fc15a43624207782725120403536553b86395e29feab83141575148d41bccc4/kubepods/burstable/podb1f180dde8423561e27fbdccea547d51/4c831146d2b39eaeead4527ed7f67e58ad0e4baf40303d0e1ae2c9bfca7a70e0/freezer.state
	I0513 22:51:19.193518   10460 api_server.go:204] freezer state: "THAWED"
	I0513 22:51:19.193606   10460 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:53205/healthz ...
	I0513 22:51:19.204050   10460 api_server.go:279] https://127.0.0.1:53205/healthz returned 200:
	ok
	I0513 22:51:19.204140   10460 status.go:422] ha-770100-m03 apiserver status = Running (err=<nil>)
	I0513 22:51:19.204339   10460 status.go:257] ha-770100-m03 status: &{Name:ha-770100-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0513 22:51:19.204367   10460 status.go:255] checking status of ha-770100-m04 ...
	I0513 22:51:19.223841   10460 cli_runner.go:164] Run: docker container inspect ha-770100-m04 --format={{.State.Status}}
	I0513 22:51:19.390059   10460 status.go:330] ha-770100-m04 host status = "Running" (err=<nil>)
	I0513 22:51:19.390059   10460 host.go:66] Checking if "ha-770100-m04" exists ...
	I0513 22:51:19.399907   10460 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-770100-m04
	I0513 22:51:19.559928   10460 host.go:66] Checking if "ha-770100-m04" exists ...
	I0513 22:51:19.571931   10460 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0513 22:51:19.578930   10460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770100-m04
	I0513 22:51:19.749897   10460 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53453 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\ha-770100-m04\id_rsa Username:docker}
	I0513 22:51:19.896234   10460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0513 22:51:19.930195   10460 status.go:257] ha-770100-m04 status: &{Name:ha-770100-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (15.43s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (2.49s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
E0513 22:51:20.807537   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-950600\client.crt: The system cannot find the path specified.
ha_test.go:390: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (2.4879866s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (2.49s)

TestMultiControlPlane/serial/RestartSecondaryNode (149.33s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 node start m02 -v=7 --alsologtostderr
E0513 22:51:53.098796   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-557700\client.crt: The system cannot find the path specified.
E0513 22:53:36.841468   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-950600\client.crt: The system cannot find the path specified.
ha_test.go:420: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 node start m02 -v=7 --alsologtostderr: (2m25.1092999s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 status -v=7 --alsologtostderr: (4.0271157s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (149.33s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (3.39s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (3.3899287s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (3.39s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (241.41s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-windows-amd64.exe node list -p ha-770100 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-windows-amd64.exe stop -p ha-770100 -v=7 --alsologtostderr
E0513 22:54:04.663359   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-950600\client.crt: The system cannot find the path specified.
ha_test.go:462: (dbg) Done: out/minikube-windows-amd64.exe stop -p ha-770100 -v=7 --alsologtostderr: (39.2780656s)
ha_test.go:467: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-770100 --wait=true -v=7 --alsologtostderr
E0513 22:55:29.932598   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-557700\client.crt: The system cannot find the path specified.
ha_test.go:467: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-770100 --wait=true -v=7 --alsologtostderr: (3m21.6424387s)
ha_test.go:472: (dbg) Run:  out/minikube-windows-amd64.exe node list -p ha-770100
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (241.41s)

TestMultiControlPlane/serial/DeleteSecondaryNode (22.83s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 node delete m03 -v=7 --alsologtostderr: (19.196832s)
ha_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 status -v=7 --alsologtostderr: (3.1859605s)
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (22.83s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (2.51s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (2.5119675s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (2.51s)

TestMultiControlPlane/serial/StopCluster (37.38s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 stop -v=7 --alsologtostderr
E0513 22:58:36.863991   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-950600\client.crt: The system cannot find the path specified.
ha_test.go:531: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 stop -v=7 --alsologtostderr: (36.625983s)
ha_test.go:537: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-770100 status -v=7 --alsologtostderr: exit status 7 (750.2252ms)

-- stdout --
	ha-770100
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-770100-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-770100-m04
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	W0513 22:58:58.669330    3132 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0513 22:58:58.754414    3132 out.go:291] Setting OutFile to fd 536 ...
	I0513 22:58:58.756048    3132 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 22:58:58.756141    3132 out.go:304] Setting ErrFile to fd 936...
	I0513 22:58:58.756141    3132 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 22:58:58.769986    3132 out.go:298] Setting JSON to false
	I0513 22:58:58.769986    3132 mustload.go:65] Loading cluster: ha-770100
	I0513 22:58:58.769986    3132 notify.go:220] Checking for updates...
	I0513 22:58:58.771451    3132 config.go:182] Loaded profile config "ha-770100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 22:58:58.771513    3132 status.go:255] checking status of ha-770100 ...
	I0513 22:58:58.790993    3132 cli_runner.go:164] Run: docker container inspect ha-770100 --format={{.State.Status}}
	I0513 22:58:58.960084    3132 status.go:330] ha-770100 host status = "Stopped" (err=<nil>)
	I0513 22:58:58.960613    3132 status.go:343] host is not running, skipping remaining checks
	I0513 22:58:58.960613    3132 status.go:257] ha-770100 status: &{Name:ha-770100 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0513 22:58:58.960613    3132 status.go:255] checking status of ha-770100-m02 ...
	I0513 22:58:58.978253    3132 cli_runner.go:164] Run: docker container inspect ha-770100-m02 --format={{.State.Status}}
	I0513 22:58:59.129865    3132 status.go:330] ha-770100-m02 host status = "Stopped" (err=<nil>)
	I0513 22:58:59.130783    3132 status.go:343] host is not running, skipping remaining checks
	I0513 22:58:59.130783    3132 status.go:257] ha-770100-m02 status: &{Name:ha-770100-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0513 22:58:59.130783    3132 status.go:255] checking status of ha-770100-m04 ...
	I0513 22:58:59.148269    3132 cli_runner.go:164] Run: docker container inspect ha-770100-m04 --format={{.State.Status}}
	I0513 22:58:59.301249    3132 status.go:330] ha-770100-m04 host status = "Stopped" (err=<nil>)
	I0513 22:58:59.301272    3132 status.go:343] host is not running, skipping remaining checks
	I0513 22:58:59.301272    3132 status.go:257] ha-770100-m04 status: &{Name:ha-770100-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (37.38s)

TestMultiControlPlane/serial/RestartCluster (110.19s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-770100 --wait=true -v=7 --alsologtostderr --driver=docker
E0513 23:00:29.945144   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-557700\client.crt: The system cannot find the path specified.
ha_test.go:560: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-770100 --wait=true -v=7 --alsologtostderr --driver=docker: (1m46.7983977s)
ha_test.go:566: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 status -v=7 --alsologtostderr
ha_test.go:566: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 status -v=7 --alsologtostderr: (2.9828978s)
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (110.19s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (2.25s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (2.2538652s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (2.25s)

TestMultiControlPlane/serial/AddSecondaryNode (76.6s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-windows-amd64.exe node add -p ha-770100 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-windows-amd64.exe node add -p ha-770100 --control-plane -v=7 --alsologtostderr: (1m12.533576s)
ha_test.go:611: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-770100 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-windows-amd64.exe -p ha-770100 status -v=7 --alsologtostderr: (4.0676555s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (76.60s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (3.36s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (3.3551305s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (3.36s)

TestImageBuild/serial/Setup (66.65s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-020700 --driver=docker
E0513 23:03:36.866631   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-950600\client.crt: The system cannot find the path specified.
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-020700 --driver=docker: (1m6.6536692s)
--- PASS: TestImageBuild/serial/Setup (66.65s)

TestImageBuild/serial/NormalBuild (4.13s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-020700
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-020700: (4.1288925s)
--- PASS: TestImageBuild/serial/NormalBuild (4.13s)

TestImageBuild/serial/BuildWithBuildArg (2.81s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-020700
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-020700: (2.8086979s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (2.81s)

TestImageBuild/serial/BuildWithDockerIgnore (1.97s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-020700
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-020700: (1.9679036s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (1.97s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (2.44s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-020700
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-020700: (2.4344837s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (2.44s)

TestJSONOutput/start/Command (110.14s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-866100 --output=json --user=testUser --memory=2200 --wait=true --driver=docker
E0513 23:05:00.054580   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-950600\client.crt: The system cannot find the path specified.
E0513 23:05:29.963572   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-557700\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-866100 --output=json --user=testUser --memory=2200 --wait=true --driver=docker: (1m50.1361526s)
--- PASS: TestJSONOutput/start/Command (110.14s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (1.63s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-866100 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-866100 --output=json --user=testUser: (1.6274694s)
--- PASS: TestJSONOutput/pause/Command (1.63s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (1.43s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-866100 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-866100 --output=json --user=testUser: (1.4321938s)
--- PASS: TestJSONOutput/unpause/Command (1.43s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.69s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-866100 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-866100 --output=json --user=testUser: (7.6803371s)
--- PASS: TestJSONOutput/stop/Command (7.69s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (1.38s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-581500 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-581500 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (245.4699ms)

-- stdout --
	{"specversion":"1.0","id":"bad50e8c-0d36-4c50-ac93-d5ab6ae974fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-581500] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4355 Build 19045.4355","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9546bf80-502c-4041-bac5-e19f4b4bf2c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube4\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"879e3b25-49b3-4e5a-aca1-88c214939100","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"30da4520-731c-409d-9d1e-8f810612f9e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"e40304d4-90d1-4495-8c8c-8b631d5563e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18872"}}
	{"specversion":"1.0","id":"ecc6d9ba-7a3c-4098-9576-6186860b3734","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b63580c1-ebc7-4e78-845e-3e5a4aa9d69f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
** stderr ** 
	W0513 23:05:59.806025   11288 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "json-output-error-581500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-581500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-581500: (1.1241275s)
--- PASS: TestErrorJSONOutput (1.38s)

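TestErrorJSONOutput above checks that `minikube start -o json` emits CloudEvents-style records, one JSON object per line, and that a bad driver produces an `io.k8s.sigs.minikube.error` event with exit code 56. A minimal sketch (not minikube's own test code) of pulling that error event out of such a stream, using two event lines copied verbatim from the captured stdout:

```python
import json

# Two event lines taken from the test's captured stdout above.
events = [
    '{"specversion":"1.0","id":"bad50e8c-0d36-4c50-ac93-d5ab6ae974fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-581500] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4355 Build 19045.4355","name":"Initial Minikube Setup","totalsteps":"19"}}',
    '{"specversion":"1.0","id":"b63580c1-ebc7-4e78-845e-3e5a4aa9d69f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver \'fail\' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}',
]

def first_error(lines):
    """Return the data payload of the first io.k8s.sigs.minikube.error event, or None."""
    for line in lines:
        ev = json.loads(line)
        if ev["type"] == "io.k8s.sigs.minikube.error":
            return ev["data"]
    return None

err = first_error(events)
print(err["name"], err["exitcode"])  # DRV_UNSUPPORTED_OS 56
```

Note the `data` fields are all strings, including `exitcode`, so a consumer must convert it before comparing numerically.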
TestKicCustomNetwork/create_custom_network (74.73s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-507700 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-507700 --network=: (1m9.263784s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-507700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-507700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-507700: (5.2718133s)
--- PASS: TestKicCustomNetwork/create_custom_network (74.73s)

TestKicCustomNetwork/use_default_bridge_network (73.47s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-801600 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-801600 --network=bridge: (1m8.5365355s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-801600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-801600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-801600: (4.738167s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (73.47s)

TestKicExistingNetwork (75.29s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-windows-amd64.exe start -p existing-network-525000 --network=existing-network
E0513 23:08:33.150544   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-557700\client.crt: The system cannot find the path specified.
E0513 23:08:36.880467   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-950600\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:93: (dbg) Done: out/minikube-windows-amd64.exe start -p existing-network-525000 --network=existing-network: (1m9.367008s)
helpers_test.go:175: Cleaning up "existing-network-525000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p existing-network-525000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p existing-network-525000: (4.5804067s)
--- PASS: TestKicExistingNetwork (75.29s)

TestKicCustomSubnet (77.39s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-subnet-234300 --subnet=192.168.60.0/24
E0513 23:10:29.970545   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-557700\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-subnet-234300 --subnet=192.168.60.0/24: (1m12.2157963s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-234300 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-234300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p custom-subnet-234300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p custom-subnet-234300: (4.9943435s)
--- PASS: TestKicCustomSubnet (77.39s)

TestKicStaticIP (75.46s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe start -p static-ip-027400 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe start -p static-ip-027400 --static-ip=192.168.200.200: (1m9.9619977s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-windows-amd64.exe -p static-ip-027400 ip
helpers_test.go:175: Cleaning up "static-ip-027400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p static-ip-027400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p static-ip-027400: (4.8779658s)
--- PASS: TestKicStaticIP (75.46s)

TestMainNoArgs (0.23s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.23s)

TestMinikubeProfile (140.49s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-956800 --driver=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-956800 --driver=docker: (1m7.4018844s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-956800 --driver=docker
E0513 23:13:36.895560   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-950600\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-956800 --driver=docker: (56.9952389s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-956800
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (2.367448s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-956800
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (2.3429697s)
helpers_test.go:175: Cleaning up "second-956800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-956800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-956800: (5.3542358s)
helpers_test.go:175: Cleaning up "first-956800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-956800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-956800: (5.1910137s)
--- PASS: TestMinikubeProfile (140.49s)

TestMountStart/serial/StartWithMountFirst (19.43s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-728300 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-728300 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker: (18.41649s)
--- PASS: TestMountStart/serial/StartWithMountFirst (19.43s)

TestMountStart/serial/VerifyMountFirst (1.1s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-728300 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-728300 ssh -- ls /minikube-host: (1.0975937s)
--- PASS: TestMountStart/serial/VerifyMountFirst (1.10s)

TestMountStart/serial/StartWithMountSecond (19.32s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-728300 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-728300 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker: (18.3117258s)
--- PASS: TestMountStart/serial/StartWithMountSecond (19.32s)

TestMountStart/serial/VerifyMountSecond (1.07s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-728300 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-728300 ssh -- ls /minikube-host: (1.0730454s)
--- PASS: TestMountStart/serial/VerifyMountSecond (1.07s)

TestMountStart/serial/DeleteFirst (3.84s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-728300 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-728300 --alsologtostderr -v=5: (3.84363s)
--- PASS: TestMountStart/serial/DeleteFirst (3.84s)

TestMountStart/serial/VerifyMountPostDelete (1.08s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-728300 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-728300 ssh -- ls /minikube-host: (1.0842598s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (1.08s)

TestMountStart/serial/Stop (2.46s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-728300
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-728300: (2.4600084s)
--- PASS: TestMountStart/serial/Stop (2.46s)

TestMountStart/serial/RestartStopped (13.24s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-728300
E0513 23:15:29.985936   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-557700\client.crt: The system cannot find the path specified.
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-728300: (12.2231825s)
--- PASS: TestMountStart/serial/RestartStopped (13.24s)

TestMountStart/serial/VerifyMountPostStop (1.1s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-728300 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-728300 ssh -- ls /minikube-host: (1.0985668s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (1.10s)

TestMultiNode/serial/FreshStart2Nodes (148.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-323200 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker
multinode_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-323200 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker: (2m25.8043983s)
multinode_test.go:102: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-323200 status --alsologtostderr
multinode_test.go:102: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-323200 status --alsologtostderr: (2.2018336s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (148.01s)

TestMultiNode/serial/DeployApp2Nodes (27.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-323200 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-323200 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-323200 -- rollout status deployment/busybox: (21.0773434s)
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-323200 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-323200 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-323200 -- exec busybox-fc5497c4f-9pjv7 -- nslookup kubernetes.io
E0513 23:18:36.917516   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-950600\client.crt: The system cannot find the path specified.
multinode_test.go:536: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-323200 -- exec busybox-fc5497c4f-9pjv7 -- nslookup kubernetes.io: (1.6370524s)
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-323200 -- exec busybox-fc5497c4f-xlc6n -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-323200 -- exec busybox-fc5497c4f-xlc6n -- nslookup kubernetes.io: (1.5469366s)
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-323200 -- exec busybox-fc5497c4f-9pjv7 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-323200 -- exec busybox-fc5497c4f-xlc6n -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-323200 -- exec busybox-fc5497c4f-9pjv7 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-323200 -- exec busybox-fc5497c4f-xlc6n -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (27.72s)

TestMultiNode/serial/PingHostFrom2Pods (2.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-323200 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-323200 -- exec busybox-fc5497c4f-9pjv7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-323200 -- exec busybox-fc5497c4f-9pjv7 -- sh -c "ping -c 1 192.168.65.254"
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-323200 -- exec busybox-fc5497c4f-xlc6n -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-323200 -- exec busybox-fc5497c4f-xlc6n -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (2.38s)

TestMultiNode/serial/AddNode (51.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-323200 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-323200 -v 3 --alsologtostderr: (48.7045013s)
multinode_test.go:127: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-323200 status --alsologtostderr
multinode_test.go:127: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-323200 status --alsologtostderr: (2.9982803s)
--- PASS: TestMultiNode/serial/AddNode (51.70s)

TestMultiNode/serial/MultiNodeLabels (0.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-323200 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.17s)

TestMultiNode/serial/ProfileList (1.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.4457387s)
--- PASS: TestMultiNode/serial/ProfileList (1.45s)

TestMultiNode/serial/CopyFile (38.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-323200 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-323200 status --output json --alsologtostderr: (2.7636135s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-323200 cp testdata\cp-test.txt multinode-323200:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-323200 cp testdata\cp-test.txt multinode-323200:/home/docker/cp-test.txt: (1.1293354s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-323200 ssh -n multinode-323200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-323200 ssh -n multinode-323200 "sudo cat /home/docker/cp-test.txt": (1.1247132s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-323200 cp multinode-323200:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile1957321928\001\cp-test_multinode-323200.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-323200 cp multinode-323200:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile1957321928\001\cp-test_multinode-323200.txt: (1.0924809s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-323200 ssh -n multinode-323200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-323200 ssh -n multinode-323200 "sudo cat /home/docker/cp-test.txt": (1.1130634s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-323200 cp multinode-323200:/home/docker/cp-test.txt multinode-323200-m02:/home/docker/cp-test_multinode-323200_multinode-323200-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-323200 cp multinode-323200:/home/docker/cp-test.txt multinode-323200-m02:/home/docker/cp-test_multinode-323200_multinode-323200-m02.txt: (1.650608s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-323200 ssh -n multinode-323200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-323200 ssh -n multinode-323200 "sudo cat /home/docker/cp-test.txt": (1.079488s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-323200 ssh -n multinode-323200-m02 "sudo cat /home/docker/cp-test_multinode-323200_multinode-323200-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-323200 ssh -n multinode-323200-m02 "sudo cat /home/docker/cp-test_multinode-323200_multinode-323200-m02.txt": (1.0989371s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-323200 cp multinode-323200:/home/docker/cp-test.txt multinode-323200-m03:/home/docker/cp-test_multinode-323200_multinode-323200-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-323200 cp multinode-323200:/home/docker/cp-test.txt multinode-323200-m03:/home/docker/cp-test_multinode-323200_multinode-323200-m03.txt: (1.6409004s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-323200 ssh -n multinode-323200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-323200 ssh -n multinode-323200 "sudo cat /home/docker/cp-test.txt": (1.0915816s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-323200 ssh -n multinode-323200-m03 "sudo cat /home/docker/cp-test_multinode-323200_multinode-323200-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-323200 ssh -n multinode-323200-m03 "sudo cat /home/docker/cp-test_multinode-323200_multinode-323200-m03.txt": (1.1070242s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-323200 cp testdata\cp-test.txt multinode-323200-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-323200 cp testdata\cp-test.txt multinode-323200-m02:/home/docker/cp-test.txt: (1.1184303s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-323200 ssh -n multinode-323200-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-323200 ssh -n multinode-323200-m02 "sudo cat /home/docker/cp-test.txt": (1.1268267s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-323200 cp multinode-323200-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile1957321928\001\cp-test_multinode-323200-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-323200 cp multinode-323200-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile1957321928\001\cp-test_multinode-323200-m02.txt: (1.1306124s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-323200 ssh -n multinode-323200-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-323200 ssh -n multinode-323200-m02 "sudo cat /home/docker/cp-test.txt": (1.1086323s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-323200 cp multinode-323200-m02:/home/docker/cp-test.txt multinode-323200:/home/docker/cp-test_multinode-323200-m02_multinode-323200.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-323200 cp multinode-323200-m02:/home/docker/cp-test.txt multinode-323200:/home/docker/cp-test_multinode-323200-m02_multinode-323200.txt: (1.6403493s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-323200 ssh -n multinode-323200-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-323200 ssh -n multinode-323200-m02 "sudo cat /home/docker/cp-test.txt": (1.0927527s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-323200 ssh -n multinode-323200 "sudo cat /home/docker/cp-test_multinode-323200-m02_multinode-323200.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-323200 ssh -n multinode-323200 "sudo cat /home/docker/cp-test_multinode-323200-m02_multinode-323200.txt": (1.0862094s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-323200 cp multinode-323200-m02:/home/docker/cp-test.txt multinode-323200-m03:/home/docker/cp-test_multinode-323200-m02_multinode-323200-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-323200 cp multinode-323200-m02:/home/docker/cp-test.txt multinode-323200-m03:/home/docker/cp-test_multinode-323200-m02_multinode-323200-m03.txt: (1.6237293s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-323200 ssh -n multinode-323200-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-323200 ssh -n multinode-323200-m02 "sudo cat /home/docker/cp-test.txt": (1.0850275s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-323200 ssh -n multinode-323200-m03 "sudo cat /home/docker/cp-test_multinode-323200-m02_multinode-323200-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-323200 ssh -n multinode-323200-m03 "sudo cat /home/docker/cp-test_multinode-323200-m02_multinode-323200-m03.txt": (1.0675339s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-323200 cp testdata\cp-test.txt multinode-323200-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-323200 cp testdata\cp-test.txt multinode-323200-m03:/home/docker/cp-test.txt: (1.0807171s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-323200 ssh -n multinode-323200-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-323200 ssh -n multinode-323200-m03 "sudo cat /home/docker/cp-test.txt": (1.0753402s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-323200 cp multinode-323200-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile1957321928\001\cp-test_multinode-323200-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-323200 cp multinode-323200-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile1957321928\001\cp-test_multinode-323200-m03.txt: (1.1390594s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-323200 ssh -n multinode-323200-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-323200 ssh -n multinode-323200-m03 "sudo cat /home/docker/cp-test.txt": (1.0851464s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-323200 cp multinode-323200-m03:/home/docker/cp-test.txt multinode-323200:/home/docker/cp-test_multinode-323200-m03_multinode-323200.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-323200 cp multinode-323200-m03:/home/docker/cp-test.txt multinode-323200:/home/docker/cp-test_multinode-323200-m03_multinode-323200.txt: (1.5874678s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-323200 ssh -n multinode-323200-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-323200 ssh -n multinode-323200-m03 "sudo cat /home/docker/cp-test.txt": (1.1194794s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-323200 ssh -n multinode-323200 "sudo cat /home/docker/cp-test_multinode-323200-m03_multinode-323200.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-323200 ssh -n multinode-323200 "sudo cat /home/docker/cp-test_multinode-323200-m03_multinode-323200.txt": (1.0795569s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-323200 cp multinode-323200-m03:/home/docker/cp-test.txt multinode-323200-m02:/home/docker/cp-test_multinode-323200-m03_multinode-323200-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-323200 cp multinode-323200-m03:/home/docker/cp-test.txt multinode-323200-m02:/home/docker/cp-test_multinode-323200-m03_multinode-323200-m02.txt: (1.6120686s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-323200 ssh -n multinode-323200-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-323200 ssh -n multinode-323200-m03 "sudo cat /home/docker/cp-test.txt": (1.0868166s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-323200 ssh -n multinode-323200-m02 "sudo cat /home/docker/cp-test_multinode-323200-m03_multinode-323200-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-323200 ssh -n multinode-323200-m02 "sudo cat /home/docker/cp-test_multinode-323200-m03_multinode-323200-m02.txt": (1.0990615s)
--- PASS: TestMultiNode/serial/CopyFile (38.95s)

TestMultiNode/serial/StopNode (6.31s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-323200 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-323200 node stop m03: (2.0753186s)
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-323200 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-323200 status: exit status 7 (2.1306916s)

-- stdout --
	multinode-323200
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-323200-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-323200-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	W0513 23:20:18.601964    9604 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-323200 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-323200 status --alsologtostderr: exit status 7 (2.1065997s)

-- stdout --
	multinode-323200
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-323200-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-323200-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	W0513 23:20:20.728598    8008 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0513 23:20:20.815558    8008 out.go:291] Setting OutFile to fd 648 ...
	I0513 23:20:20.816614    8008 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 23:20:20.816614    8008 out.go:304] Setting ErrFile to fd 236...
	I0513 23:20:20.816614    8008 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 23:20:20.828980    8008 out.go:298] Setting JSON to false
	I0513 23:20:20.828980    8008 mustload.go:65] Loading cluster: multinode-323200
	I0513 23:20:20.828980    8008 notify.go:220] Checking for updates...
	I0513 23:20:20.830150    8008 config.go:182] Loaded profile config "multinode-323200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 23:20:20.830150    8008 status.go:255] checking status of multinode-323200 ...
	I0513 23:20:20.849094    8008 cli_runner.go:164] Run: docker container inspect multinode-323200 --format={{.State.Status}}
	I0513 23:20:21.015270    8008 status.go:330] multinode-323200 host status = "Running" (err=<nil>)
	I0513 23:20:21.015270    8008 host.go:66] Checking if "multinode-323200" exists ...
	I0513 23:20:21.025515    8008 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-323200
	I0513 23:20:21.186960    8008 host.go:66] Checking if "multinode-323200" exists ...
	I0513 23:20:21.199597    8008 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0513 23:20:21.207627    8008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-323200
	I0513 23:20:21.371871    8008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54453 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\multinode-323200\id_rsa Username:docker}
	I0513 23:20:21.512356    8008 ssh_runner.go:195] Run: systemctl --version
	I0513 23:20:21.539417    8008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0513 23:20:21.571540    8008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-323200
	I0513 23:20:21.729780    8008 kubeconfig.go:125] found "multinode-323200" server: "https://127.0.0.1:54458"
	I0513 23:20:21.729780    8008 api_server.go:166] Checking apiserver status ...
	I0513 23:20:21.740150    8008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0513 23:20:21.773892    8008 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2303/cgroup
	I0513 23:20:21.793643    8008 api_server.go:182] apiserver freezer: "7:freezer:/docker/b7846749c738158c140772d7d4ec8e50b20777c68f0c42edf873f2642c5af040/kubepods/burstable/podcb9f524892813cd0ef984e221b0bfb1b/cdb1999abecd7dbbf324b205f43daa8f3b2494966fdc2bb0d19f9657dfe53ea7"
	I0513 23:20:21.804715    8008 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b7846749c738158c140772d7d4ec8e50b20777c68f0c42edf873f2642c5af040/kubepods/burstable/podcb9f524892813cd0ef984e221b0bfb1b/cdb1999abecd7dbbf324b205f43daa8f3b2494966fdc2bb0d19f9657dfe53ea7/freezer.state
	I0513 23:20:21.822898    8008 api_server.go:204] freezer state: "THAWED"
	I0513 23:20:21.822898    8008 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:54458/healthz ...
	I0513 23:20:21.833486    8008 api_server.go:279] https://127.0.0.1:54458/healthz returned 200:
	ok
	I0513 23:20:21.833486    8008 status.go:422] multinode-323200 apiserver status = Running (err=<nil>)
	I0513 23:20:21.833486    8008 status.go:257] multinode-323200 status: &{Name:multinode-323200 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0513 23:20:21.833486    8008 status.go:255] checking status of multinode-323200-m02 ...
	I0513 23:20:21.854685    8008 cli_runner.go:164] Run: docker container inspect multinode-323200-m02 --format={{.State.Status}}
	I0513 23:20:22.009017    8008 status.go:330] multinode-323200-m02 host status = "Running" (err=<nil>)
	I0513 23:20:22.009017    8008 host.go:66] Checking if "multinode-323200-m02" exists ...
	I0513 23:20:22.021515    8008 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-323200-m02
	I0513 23:20:22.182339    8008 host.go:66] Checking if "multinode-323200-m02" exists ...
	I0513 23:20:22.196595    8008 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0513 23:20:22.206406    8008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-323200-m02
	I0513 23:20:22.369048    8008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54508 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\multinode-323200-m02\id_rsa Username:docker}
	I0513 23:20:22.514468    8008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0513 23:20:22.544894    8008 status.go:257] multinode-323200-m02 status: &{Name:multinode-323200-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0513 23:20:22.544894    8008 status.go:255] checking status of multinode-323200-m03 ...
	I0513 23:20:22.562258    8008 cli_runner.go:164] Run: docker container inspect multinode-323200-m03 --format={{.State.Status}}
	I0513 23:20:22.729579    8008 status.go:330] multinode-323200-m03 host status = "Stopped" (err=<nil>)
	I0513 23:20:22.729579    8008 status.go:343] host is not running, skipping remaining checks
	I0513 23:20:22.729579    8008 status.go:257] multinode-323200-m03 status: &{Name:multinode-323200-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (6.31s)

TestMultiNode/serial/StartAfterStop (20.41s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-323200 node start m03 -v=7 --alsologtostderr
E0513 23:20:29.997068   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-557700\client.crt: The system cannot find the path specified.
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-323200 node start m03 -v=7 --alsologtostderr: (17.4967936s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-323200 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-323200 status -v=7 --alsologtostderr: (2.7511266s)
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (20.41s)

TestMultiNode/serial/RestartKeepsNodes (123.87s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-323200
multinode_test.go:321: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-323200
multinode_test.go:321: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-323200: (25.7601094s)
multinode_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-323200 --wait=true -v=8 --alsologtostderr
E0513 23:21:40.100141   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-950600\client.crt: The system cannot find the path specified.
multinode_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-323200 --wait=true -v=8 --alsologtostderr: (1m37.6663816s)
multinode_test.go:331: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-323200
--- PASS: TestMultiNode/serial/RestartKeepsNodes (123.87s)

TestMultiNode/serial/DeleteNode (13.29s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-323200 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-323200 node delete m03: (10.932612s)
multinode_test.go:422: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-323200 status --alsologtostderr
multinode_test.go:422: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-323200 status --alsologtostderr: (1.9428313s)
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (13.29s)

TestMultiNode/serial/StopMultiNode (25.09s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-323200 stop
multinode_test.go:345: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-323200 stop: (23.9522959s)
multinode_test.go:351: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-323200 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-323200 status: exit status 7 (574.3444ms)

-- stdout --
	multinode-323200
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-323200-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	W0513 23:23:24.363725    7724 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
multinode_test.go:358: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-323200 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-323200 status --alsologtostderr: exit status 7 (564.2519ms)

-- stdout --
	multinode-323200
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-323200-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	W0513 23:23:24.942219    6304 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0513 23:23:25.019831    6304 out.go:291] Setting OutFile to fd 792 ...
	I0513 23:23:25.020374    6304 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 23:23:25.020374    6304 out.go:304] Setting ErrFile to fd 572...
	I0513 23:23:25.020374    6304 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 23:23:25.034780    6304 out.go:298] Setting JSON to false
	I0513 23:23:25.034889    6304 mustload.go:65] Loading cluster: multinode-323200
	I0513 23:23:25.035000    6304 notify.go:220] Checking for updates...
	I0513 23:23:25.035162    6304 config.go:182] Loaded profile config "multinode-323200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 23:23:25.035162    6304 status.go:255] checking status of multinode-323200 ...
	I0513 23:23:25.054313    6304 cli_runner.go:164] Run: docker container inspect multinode-323200 --format={{.State.Status}}
	I0513 23:23:25.212082    6304 status.go:330] multinode-323200 host status = "Stopped" (err=<nil>)
	I0513 23:23:25.212082    6304 status.go:343] host is not running, skipping remaining checks
	I0513 23:23:25.212082    6304 status.go:257] multinode-323200 status: &{Name:multinode-323200 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0513 23:23:25.212082    6304 status.go:255] checking status of multinode-323200-m02 ...
	I0513 23:23:25.229861    6304 cli_runner.go:164] Run: docker container inspect multinode-323200-m02 --format={{.State.Status}}
	I0513 23:23:25.383499    6304 status.go:330] multinode-323200-m02 host status = "Stopped" (err=<nil>)
	I0513 23:23:25.383499    6304 status.go:343] host is not running, skipping remaining checks
	I0513 23:23:25.383499    6304 status.go:257] multinode-323200-m02 status: &{Name:multinode-323200-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.09s)

TestMultiNode/serial/RestartMultiNode (51s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-323200 --wait=true -v=8 --alsologtostderr --driver=docker
E0513 23:23:36.921983   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-950600\client.crt: The system cannot find the path specified.
multinode_test.go:376: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-323200 --wait=true -v=8 --alsologtostderr --driver=docker: (48.612491s)
multinode_test.go:382: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-323200 status --alsologtostderr
multinode_test.go:382: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-323200 status --alsologtostderr: (1.9768412s)
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (51.00s)

TestMultiNode/serial/ValidateNameConflict (64.03s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-323200
multinode_test.go:464: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-323200-m02 --driver=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-323200-m02 --driver=docker: exit status 14 (301.0235ms)

-- stdout --
	* [multinode-323200-m02] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4355 Build 19045.4355
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18872
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	W0513 23:24:16.747943    5208 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	! Profile name 'multinode-323200-m02' is duplicated with machine name 'multinode-323200-m02' in profile 'multinode-323200'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-323200-m03 --driver=docker
E0513 23:25:13.199358   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-557700\client.crt: The system cannot find the path specified.
multinode_test.go:472: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-323200-m03 --driver=docker: (56.9939773s)
multinode_test.go:479: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-323200
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p multinode-323200: exit status 80 (1.1586837s)

-- stdout --
	* Adding node m03 to cluster multinode-323200 as [worker]
	
	

-- /stdout --
** stderr ** 
	W0513 23:25:14.032415    8652 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-323200-m03 already exists in multinode-323200-m03 profile
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube_node_6ccce2fc44e3bb58d6c4f91e09ae7c7eaaf65535_28.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-windows-amd64.exe delete -p multinode-323200-m03
multinode_test.go:484: (dbg) Done: out/minikube-windows-amd64.exe delete -p multinode-323200-m03: (5.3543842s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (64.03s)

TestPreload (182.08s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-875300 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-875300 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.24.4: (1m58.3090918s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-875300 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-875300 image pull gcr.io/k8s-minikube/busybox: (1.9262532s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-875300
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-875300: (12.2896604s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-875300 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker
preload_test.go:66: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-875300 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker: (42.5108962s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-875300 image list
helpers_test.go:175: Cleaning up "test-preload-875300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-875300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-875300: (6.2168438s)
--- PASS: TestPreload (182.08s)

TestScheduledStopWindows (148.23s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-256700 --memory=2048 --driver=docker
E0513 23:28:36.940385   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-950600\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-256700 --memory=2048 --driver=docker: (1m17.7862268s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-256700 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-256700 --schedule 5m: (1.4151612s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-256700 -n scheduled-stop-256700
scheduled_stop_test.go:191: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-256700 -n scheduled-stop-256700: (1.289154s)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-256700 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-256700 -- sudo systemctl show minikube-scheduled-stop --no-page: (1.2925446s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-256700 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-256700 --schedule 5s: (1.4485206s)
E0513 23:30:30.024802   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-557700\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-256700
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-256700: exit status 7 (408.0136ms)

-- stdout --
	scheduled-stop-256700
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	W0513 23:30:55.673906    1016 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-256700 -n scheduled-stop-256700
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-256700 -n scheduled-stop-256700: exit status 7 (387.1014ms)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0513 23:30:56.071841    8860 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-256700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-256700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-256700: (4.18668s)
--- PASS: TestScheduledStopWindows (148.23s)

TestInsufficientStorage (45.7s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe start -p insufficient-storage-881500 --memory=2048 --output=json --wait=true --driver=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p insufficient-storage-881500 --memory=2048 --output=json --wait=true --driver=docker: exit status 26 (39.0884818s)

-- stdout --
	{"specversion":"1.0","id":"f469aa8e-a474-42ad-ab19-fc636f6f9a47","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-881500] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4355 Build 19045.4355","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1513c562-8eb1-4843-b829-1e4b7d70195b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube4\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"cd8deebe-8b45-4a43-8239-6688132850e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3ff1f854-02fe-4eb7-a541-879186896d00","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"54a44c59-667a-41f3-a668-880c66eabdf1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18872"}}
	{"specversion":"1.0","id":"3d3acedc-f5e9-47ff-900c-f4b92c40eed2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"57bb74db-5c7b-4046-994f-a4523e3fa369","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"83dfff86-85fc-4477-90a7-4e70fd24b29b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"7d1dc4b3-3b65-4c79-b874-4aeeb1df3c70","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"b96e273c-9634-44a9-88a3-723082eb7590","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"cbebc33a-9cb7-4c7e-9dc3-1ca9bfb3a2d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-881500\" primary control-plane node in \"insufficient-storage-881500\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"6320f672-2dae-4fff-94a7-6de1dde428fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"fcb03e04-c508-4d8e-8ad0-42fb03022414","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"99a29cec-7130-4f09-bf31-1eb3917b2360","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
** stderr ** 
	W0513 23:31:00.650472    6968 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-881500 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-881500 --output=json --layout=cluster: exit status 7 (1.1476936s)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-881500","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-881500","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	W0513 23:31:39.740982    3720 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0513 23:31:40.732915    3720 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-881500" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-881500 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-881500 --output=json --layout=cluster: exit status 7 (1.1333916s)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-881500","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-881500","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	W0513 23:31:40.887104    2416 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0513 23:31:41.870401    2416 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-881500" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	E0513 23:31:41.902655    2416 status.go:560] unable to read event log: stat: CreateFile C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\insufficient-storage-881500\events.json: The system cannot find the file specified.

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-881500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p insufficient-storage-881500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p insufficient-storage-881500: (4.3269373s)
--- PASS: TestInsufficientStorage (45.70s)
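Editor's note: with `--output=json`, minikube emits one CloudEvents-style JSON object per line, as seen in the stdout above. A minimal sketch of picking the error event out of such a stream (the event shape is copied from this log; field stability across minikube versions is an assumption):

```python
import json

# One line as emitted by `minikube start --output=json`, abbreviated from the
# RSRC_DOCKER_STORAGE error event captured in the stdout above.
line = ('{"specversion":"1.0","source":"https://minikube.sigs.k8s.io/",'
        '"type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json",'
        '"data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE",'
        '"message":"Docker is out of disk space!"}}')

def classify(raw: str):
    """Return (event_type, data) for one minikube JSON log line."""
    evt = json.loads(raw)
    return evt["type"], evt.get("data", {})

etype, data = classify(line)
if etype == "io.k8s.sigs.minikube.error":
    # note: exitcode is carried as a string inside the envelope
    print(data["name"], int(data["exitcode"]))
```

This is how the test harness can distinguish `io.k8s.sigs.minikube.step` progress events from the terminal `io.k8s.sigs.minikube.error` event that produced exit status 26 here.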

                                                
                                    
TestRunningBinaryUpgrade (246.45s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.26.0.2207913997.exe start -p running-upgrade-219900 --memory=2200 --vm-driver=docker
version_upgrade_test.go:120: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.26.0.2207913997.exe start -p running-upgrade-219900 --memory=2200 --vm-driver=docker: (2m0.570916s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-219900 --memory=2200 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-219900 --memory=2200 --alsologtostderr -v=1 --driver=docker: (1m57.7708071s)
helpers_test.go:175: Cleaning up "running-upgrade-219900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-219900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-219900: (7.1102706s)
--- PASS: TestRunningBinaryUpgrade (246.45s)

                                                
                                    
TestKubernetesUpgrade (554.4s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-020200 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-020200 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker: (2m6.1325381s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-020200
version_upgrade_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-020200: (5.8556427s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-020200 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-020200 status --format={{.Host}}: exit status 7 (485.742ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0513 23:36:40.060268    6944 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-020200 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-020200 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=docker: (6m4.1392482s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-020200 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-020200 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-020200 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker: exit status 106 (333.9926ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-020200] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4355 Build 19045.4355
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18872
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0513 23:42:44.922482   11980 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-020200
	    minikube start -p kubernetes-upgrade-020200 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0202002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.0, by running:
	    
	    minikube start -p kubernetes-upgrade-020200 --kubernetes-version=v1.30.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-020200 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-020200 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=docker: (48.4311667s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-020200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-020200
E0513 23:43:36.984381   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-950600\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-020200: (8.8080085s)
--- PASS: TestKubernetesUpgrade (554.40s)
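Editor's note: the exit status 106 (`K8S_DOWNGRADE_UNSUPPORTED`) above comes from minikube refusing to move an existing v1.30.0 cluster to v1.20.0. A minimal sketch of that kind of semver gate (hypothetical helpers for illustration, not minikube's actual code):

```python
def parse_version(v: str):
    # "v1.30.0" -> (1, 30, 0); tuples compare component-wise
    return tuple(int(p) for p in v.lstrip("v").split("."))

def downgrade_requested(current: str, requested: str) -> bool:
    """True when the requested version is older than the running cluster."""
    return parse_version(requested) < parse_version(current)

# The combination rejected above with exit status 106:
print(downgrade_requested("v1.30.0", "v1.20.0"))  # downgrade -> refused
print(downgrade_requested("v1.20.0", "v1.30.0"))  # upgrade -> allowed
```

When the gate trips, the supported paths are the ones the suggestion text lists: delete and recreate at the older version, start a second profile, or keep the existing version.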

                                                
                                    
TestMissingContainerUpgrade (389.73s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.26.0.3683341189.exe start -p missing-upgrade-056100 --memory=2200 --driver=docker
E0513 23:33:36.952623   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-950600\client.crt: The system cannot find the path specified.
version_upgrade_test.go:309: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.26.0.3683341189.exe start -p missing-upgrade-056100 --memory=2200 --driver=docker: (3m39.3327519s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-056100
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-056100: (21.3403777s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-056100
version_upgrade_test.go:329: (dbg) Run:  out/minikube-windows-amd64.exe start -p missing-upgrade-056100 --memory=2200 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-windows-amd64.exe start -p missing-upgrade-056100 --memory=2200 --alsologtostderr -v=1 --driver=docker: (2m20.9867975s)
helpers_test.go:175: Cleaning up "missing-upgrade-056100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p missing-upgrade-056100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p missing-upgrade-056100: (6.3097467s)
--- PASS: TestMissingContainerUpgrade (389.73s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-384200 --no-kubernetes --kubernetes-version=1.20 --driver=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-384200 --no-kubernetes --kubernetes-version=1.20 --driver=docker: exit status 14 (298.2101ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-384200] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4355 Build 19045.4355
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18872
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0513 23:31:46.360377    6020 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.30s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (123.95s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-384200 --driver=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-384200 --driver=docker: (2m2.3893067s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-384200 status -o json
no_kubernetes_test.go:200: (dbg) Done: out/minikube-windows-amd64.exe -p NoKubernetes-384200 status -o json: (1.5615348s)
--- PASS: TestNoKubernetes/serial/StartWithK8s (123.95s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (44.03s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-384200 --no-kubernetes --driver=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-384200 --no-kubernetes --driver=docker: (36.1713433s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-384200 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p NoKubernetes-384200 status -o json: exit status 2 (1.6831053s)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-384200","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	W0513 23:34:26.812798    8100 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-windows-amd64.exe delete -p NoKubernetes-384200
no_kubernetes_test.go:124: (dbg) Done: out/minikube-windows-amd64.exe delete -p NoKubernetes-384200: (6.1753803s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (44.03s)

                                                
                                    
TestNoKubernetes/serial/Start (36.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-384200 --no-kubernetes --driver=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-384200 --no-kubernetes --driver=docker: (36.1773732s)
--- PASS: TestNoKubernetes/serial/Start (36.18s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.22s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.22s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (216.11s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.26.0.2673727040.exe start -p stopped-upgrade-220700 --memory=2200 --vm-driver=docker
version_upgrade_test.go:183: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.26.0.2673727040.exe start -p stopped-upgrade-220700 --memory=2200 --vm-driver=docker: (1m28.9673906s)
version_upgrade_test.go:192: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.26.0.2673727040.exe -p stopped-upgrade-220700 stop
version_upgrade_test.go:192: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.26.0.2673727040.exe -p stopped-upgrade-220700 stop: (22.0993691s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-220700 --memory=2200 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-220700 --memory=2200 --alsologtostderr -v=1 --driver=docker: (1m45.0397529s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (216.11s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (1.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p NoKubernetes-384200 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p NoKubernetes-384200 "sudo systemctl is-active --quiet service kubelet": exit status 1 (1.3653567s)

                                                
                                                
** stderr ** 
	W0513 23:35:10.831834    5988 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (1.37s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (13.14s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-windows-amd64.exe profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-windows-amd64.exe profile list: (8.184393s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe profile list --output=json: (4.9572185s)
--- PASS: TestNoKubernetes/serial/ProfileList (13.14s)

                                                
                                    
TestNoKubernetes/serial/Stop (3.45s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-windows-amd64.exe stop -p NoKubernetes-384200
no_kubernetes_test.go:158: (dbg) Done: out/minikube-windows-amd64.exe stop -p NoKubernetes-384200: (3.4471723s)
--- PASS: TestNoKubernetes/serial/Stop (3.45s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (21.78s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-384200 --driver=docker
E0513 23:35:30.039644   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-557700\client.crt: The system cannot find the path specified.
no_kubernetes_test.go:191: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-384200 --driver=docker: (21.783007s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (21.78s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (1.44s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p NoKubernetes-384200 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p NoKubernetes-384200 "sudo systemctl is-active --quiet service kubelet": exit status 1 (1.4375594s)

                                                
                                                
** stderr ** 
	W0513 23:35:50.624603   13020 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (1.44s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (3.97s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-220700
version_upgrade_test.go:206: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-220700: (3.9710113s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (3.97s)

                                                
                                    
TestPause/serial/Start (114.92s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-530200 --memory=2048 --install-addons=false --wait=all --driver=docker
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-530200 --memory=2048 --install-addons=false --wait=all --driver=docker: (1m54.9167037s)
--- PASS: TestPause/serial/Start (114.92s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (49.39s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-530200 --alsologtostderr -v=1 --driver=docker
E0513 23:40:30.067051   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-557700\client.crt: The system cannot find the path specified.
pause_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-530200 --alsologtostderr -v=1 --driver=docker: (49.3664455s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (49.39s)

                                                
                                    
TestPause/serial/Pause (1.73s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-530200 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-530200 --alsologtostderr -v=5: (1.7295808s)
--- PASS: TestPause/serial/Pause (1.73s)

                                                
                                    
TestPause/serial/VerifyStatus (1.43s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p pause-530200 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p pause-530200 --output=json --layout=cluster: exit status 2 (1.4261273s)

                                                
                                                
-- stdout --
	{"Name":"pause-530200","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-530200","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	W0513 23:41:02.157636    1424 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
--- PASS: TestPause/serial/VerifyStatus (1.43s)
TestPause/serial/Unpause (1.55s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p pause-530200 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe unpause -p pause-530200 --alsologtostderr -v=5: (1.5529309s)
--- PASS: TestPause/serial/Unpause (1.55s)
TestPause/serial/PauseAgain (2.05s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-530200 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-530200 --alsologtostderr -v=5: (2.0462325s)
--- PASS: TestPause/serial/PauseAgain (2.05s)
TestPause/serial/DeletePaused (6.35s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p pause-530200 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p pause-530200 --alsologtostderr -v=5: (6.3437162s)
--- PASS: TestPause/serial/DeletePaused (6.35s)
TestPause/serial/VerifyDeletedResources (19.66s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (19.07252s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-530200
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-530200: exit status 1 (160.1507ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-530200: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (19.66s)
TestStartStop/group/old-k8s-version/serial/FirstStart (251.59s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-873100 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-873100 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.20.0: (4m11.5880666s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (251.59s)
TestStartStop/group/no-preload/serial/FirstStart (149.58s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-561500 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-561500 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.30.0: (2m29.5802286s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (149.58s)
TestStartStop/group/embed-certs/serial/FirstStart (109.56s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-524600 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-524600 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.30.0: (1m49.5633857s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (109.56s)
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (129.95s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-diff-port-062300 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.30.0
E0513 23:45:30.070035   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-557700\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-diff-port-062300 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.30.0: (2m9.9451105s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (129.95s)
TestStartStop/group/embed-certs/serial/DeployApp (11.94s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-524600 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e6758f04-337c-461f-a3fb-dddd132d2095] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e6758f04-337c-461f-a3fb-dddd132d2095] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.0193323s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-524600 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.94s)
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.6s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-524600 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-524600 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.304843s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-524600 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.60s)
TestStartStop/group/no-preload/serial/DeployApp (9.71s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-561500 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d3cb1a04-6013-43fb-86e5-1ed9481fb643] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d3cb1a04-6013-43fb-86e5-1ed9481fb643] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.01158s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-561500 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.71s)
TestStartStop/group/embed-certs/serial/Stop (12.88s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p embed-certs-524600 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p embed-certs-524600 --alsologtostderr -v=3: (12.8830966s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.88s)
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.48s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-561500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-561500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.1620028s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-561500 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.48s)
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.73s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-062300 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b8fc9878-ac8a-40ee-871b-acd9ee501e7b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b8fc9878-ac8a-40ee-871b-acd9ee501e7b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.0085097s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-062300 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.73s)
TestStartStop/group/no-preload/serial/Stop (13.09s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p no-preload-561500 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p no-preload-561500 --alsologtostderr -v=3: (13.0940595s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.09s)
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (1.2s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-524600 -n embed-certs-524600
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-524600 -n embed-certs-524600: exit status 7 (467.9251ms)
-- stdout --
	Stopped
-- /stdout --
** stderr ** 
	W0513 23:45:59.472316    2376 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-524600 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (1.20s)
TestStartStop/group/embed-certs/serial/SecondStart (283.06s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-524600 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.30.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-524600 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.30.0: (4m41.6461616s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-524600 -n embed-certs-524600
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-524600 -n embed-certs-524600: (1.4139634s)
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (283.06s)
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.61s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-diff-port-062300 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-diff-port-062300 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.3044347s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-062300 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.61s)
TestStartStop/group/default-k8s-diff-port/serial/Stop (13.28s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p default-k8s-diff-port-062300 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p default-k8s-diff-port-062300 --alsologtostderr -v=3: (13.2810171s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.28s)
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (1.23s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-561500 -n no-preload-561500
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-561500 -n no-preload-561500: exit status 7 (487.0163ms)
-- stdout --
	Stopped
-- /stdout --
** stderr ** 
	W0513 23:46:10.696735   10872 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-561500 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (1.23s)
TestStartStop/group/no-preload/serial/SecondStart (298.08s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-561500 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.30.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-561500 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.30.0: (4m55.9978281s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-561500 -n no-preload-561500
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-561500 -n no-preload-561500: (2.0825473s)
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (298.08s)
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (1.15s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-062300 -n default-k8s-diff-port-062300
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-062300 -n default-k8s-diff-port-062300: exit status 7 (459.9555ms)
-- stdout --
	Stopped
-- /stdout --
** stderr ** 
	W0513 23:46:22.956900    9768 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-diff-port-062300 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (1.15s)
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (298.27s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-diff-port-062300 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.30.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-diff-port-062300 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.30.0: (4m56.2803241s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-062300 -n default-k8s-diff-port-062300
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-062300 -n default-k8s-diff-port-062300: (1.9880106s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (298.27s)
TestStartStop/group/old-k8s-version/serial/DeployApp (11.38s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-873100 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e8f5f477-853d-4b77-89c7-2a1c21ac32b4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e8f5f477-853d-4b77-89c7-2a1c21ac32b4] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.2486287s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-873100 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.38s)
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.84s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-873100 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-873100 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.3887836s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-873100 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.84s)
TestStartStop/group/old-k8s-version/serial/Stop (12.9s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p old-k8s-version-873100 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p old-k8s-version-873100 --alsologtostderr -v=3: (12.8992228s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.90s)
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (1.05s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-873100 -n old-k8s-version-873100
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-873100 -n old-k8s-version-873100: exit status 7 (411.2086ms)
-- stdout --
	Stopped
-- /stdout --
** stderr ** 
	W0513 23:47:49.767342   11992 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-873100 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (1.05s)
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.02s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-m5pdf" [469b7978-ce07-47b8-a1d3-57ecb8ef6ef8] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0165532s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.02s)
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.41s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-m5pdf" [469b7978-ce07-47b8-a1d3-57ecb8ef6ef8] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0277921s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-524600 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.41s)
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.86s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p embed-certs-524600 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.86s)
TestStartStop/group/embed-certs/serial/Pause (11.18s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p embed-certs-524600 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p embed-certs-524600 --alsologtostderr -v=1: (1.9615134s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-524600 -n embed-certs-524600
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-524600 -n embed-certs-524600: exit status 2 (1.6213294s)

-- stdout --
	Paused

-- /stdout --
** stderr ** 
	W0513 23:50:58.020917   10264 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-524600 -n embed-certs-524600
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-524600 -n embed-certs-524600: exit status 2 (1.4797197s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0513 23:50:59.617613    3400 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p embed-certs-524600 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p embed-certs-524600 --alsologtostderr -v=1: (2.2068174s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-524600 -n embed-certs-524600
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-524600 -n embed-certs-524600: (2.3130388s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-524600 -n embed-certs-524600
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-524600 -n embed-certs-524600: (1.594844s)
--- PASS: TestStartStop/group/embed-certs/serial/Pause (11.18s)
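Note on the stderr warning repeated throughout this report: the opaque directory name in the `\.docker\contexts\meta\...` path is not random. Docker stores CLI context metadata under `contexts/meta/<sha256(context name)>/meta.json`, so the path component should be reproducible from the context name "default". A minimal sketch (an aside, not part of the test suite):

```python
# The directory component in the warning path is (assumed to be) the SHA-256
# digest of the Docker context name, here "default".
import hashlib

digest = hashlib.sha256(b"default").hexdigest()
print(digest)
```

If this matches `37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f`, the warning simply means the `default` context's metadata file was never written on this Jenkins worker, which minikube tolerates.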
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-7n7wn" [90072b2e-bf44-45b9-bc3e-be0d73d781b8] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0301003s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.04s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-949100 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-949100 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.30.0: (1m47.8214972s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (107.82s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-7n7wn" [90072b2e-bf44-45b9-bc3e-be0d73d781b8] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0304937s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-561500 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.51s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-561500 image list --format=json
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe -p no-preload-561500 image list --format=json: (1.285783s)
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (1.29s)
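The VerifyKubernetesImages steps above parse `minikube image list --format=json` and log any image outside the expected Kubernetes/minikube set (hence "Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc"). A rough sketch of that check — the JSON field name and the expected-prefix list are illustrative assumptions, not the actual test source:

```python
import json

# Hypothetical subset of expected images, for illustration only.
EXPECTED_PREFIXES = (
    "registry.k8s.io/",
    "gcr.io/k8s-minikube/storage-provisioner",
)

def non_minikube_images(image_list_json: str) -> list[str]:
    """Return repo tags from `image list --format=json` output that are
    not in the expected image set (assumes a `repoTags` field)."""
    found = []
    for entry in json.loads(image_list_json):
        for tag in entry.get("repoTags", []):
            if not tag.startswith(EXPECTED_PREFIXES):
                found.append(tag)
    return found

sample = json.dumps([
    {"repoTags": ["registry.k8s.io/kube-apiserver:v1.30.0"]},
    {"repoTags": ["gcr.io/k8s-minikube/busybox:1.28.4-glibc"]},
])
print(non_minikube_images(sample))  # the busybox test image is flagged
```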
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-rm6z4" [fab3fd8c-a727-46ff-b240-1eafc93b8339] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0239293s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.03s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p no-preload-561500 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p no-preload-561500 --alsologtostderr -v=1: (1.9447506s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-561500 -n no-preload-561500
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-561500 -n no-preload-561500: exit status 2 (1.453668s)

-- stdout --
	Paused

-- /stdout --
** stderr ** 
	W0513 23:51:24.795241    7328 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-561500 -n no-preload-561500
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-561500 -n no-preload-561500: exit status 2 (1.4502884s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0513 23:51:26.247731    6716 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p no-preload-561500 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p no-preload-561500 --alsologtostderr -v=1: (1.7619226s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-561500 -n no-preload-561500
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-561500 -n no-preload-561500: (1.802216s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-561500 -n no-preload-561500
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-561500 -n no-preload-561500: (1.6536285s)
--- PASS: TestStartStop/group/no-preload/serial/Pause (10.07s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-rm6z4" [fab3fd8c-a727-46ff-b240-1eafc93b8339] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0253514s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-062300 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.55s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p default-k8s-diff-port-062300 image list --format=json
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe -p default-k8s-diff-port-062300 image list --format=json: (1.0763926s)
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (1.08s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p auto-589900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p auto-589900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker: (1m45.8057689s)
--- PASS: TestNetworkPlugins/group/auto/Start (105.81s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p kindnet-589900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p kindnet-589900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker: (2m5.9551455s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (125.96s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-949100 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-949100 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (4.3909395s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (4.39s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p newest-cni-949100 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p newest-cni-949100 --alsologtostderr -v=3: (8.2006133s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.20s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-949100 -n newest-cni-949100
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-949100 -n newest-cni-949100: exit status 7 (539.7779ms)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0513 23:53:16.101478    9224 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p newest-cni-949100 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (1.37s)
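The "status error: exit status N (may be ok)" lines above reflect that `minikube status` exits non-zero when a component is not Running, and the integration test tolerates the exit codes expected after a deliberate `stop` or `pause`. A sketch of that tolerance, with the accepted set inferred from this log rather than from the test source:

```python
# Exit codes observed in this report: 7 after `minikube stop` (host stopped),
# 2 while paused (component stopped). 0 is a fully running cluster.
ACCEPTED_STATUS_EXITS = {0, 2, 7}

def status_exit_ok(exit_code: int) -> bool:
    """True if a `minikube status` exit code should not fail the test phase."""
    return exit_code in ACCEPTED_STATUS_EXITS

print(status_exit_ok(7))  # tolerated after stop
print(status_exit_ok(2))  # tolerated while paused
print(status_exit_ok(1))  # a genuine status error would not be tolerated
```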
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-949100 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.30.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-949100 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.30.0: (38.5343489s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-949100 -n newest-cni-949100
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-949100 -n newest-cni-949100: (1.6280077s)
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (40.16s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p auto-589900 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p auto-589900 "pgrep -a kubelet": (1.5542281s)
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (1.55s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-589900 replace --force -f testdata\netcat-deployment.yaml
E0513 23:53:37.008112   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-950600\client.crt: The system cannot find the path specified.
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-tsb6d" [bcb072ce-459d-4e62-b571-ad9ff51f7ce0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-tsb6d" [bcb072ce-459d-4e62-b571-ad9ff51f7ce0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 20.0322115s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (20.92s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-949100 image list --format=json
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe -p newest-cni-949100 image list --format=json: (1.1580108s)
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (1.16s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-589900 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.57s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-589900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.43s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p newest-cni-949100 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p newest-cni-949100 --alsologtostderr -v=1: (2.4980201s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-949100 -n newest-cni-949100
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-949100 -n newest-cni-949100: exit status 2 (1.6301699s)

-- stdout --
	Paused

-- /stdout --
** stderr ** 
	W0513 23:54:01.344193   11256 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-949100 -n newest-cni-949100
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-949100 -n newest-cni-949100: exit status 2 (1.5757232s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0513 23:54:02.950414    8740 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p newest-cni-949100 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p newest-cni-949100 --alsologtostderr -v=1: (2.4964288s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-949100 -n newest-cni-949100
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-949100 -n newest-cni-949100: (3.0590539s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-949100 -n newest-cni-949100
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-949100 -n newest-cni-949100: (2.6233807s)
--- PASS: TestStartStop/group/newest-cni/serial/Pause (13.88s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-589900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.57s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p calico-589900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p calico-589900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker: (3m41.5328037s)
--- PASS: TestNetworkPlugins/group/calico/Start (221.53s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-wnl6s" [0e0ca8c1-951f-437a-b347-bad9fa18feb3] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.029925s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.04s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kindnet-589900 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p kindnet-589900 "pgrep -a kubelet": (1.4955208s)
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (1.50s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-589900 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-tp4kf" [acb78b5f-ae30-4cc2-b7d1-e2b61a83e368] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-tp4kf" [acb78b5f-ae30-4cc2-b7d1-e2b61a83e368] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 31.2115796s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (31.94s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-s2jfh" [e40e56ea-ccf2-486d-92a9-ba3fbcf5cb14] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0302308s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.26s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-s2jfh" [e40e56ea-ccf2-486d-92a9-ba3fbcf5cb14] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0227812s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-873100 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.95s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (1.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p old-k8s-version-873100 image list --format=json
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe -p old-k8s-version-873100 image list --format=json: (1.212754s)
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (1.21s)

TestNetworkPlugins/group/kindnet/DNS (0.52s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-589900 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.52s)

TestNetworkPlugins/group/kindnet/Localhost (0.5s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-589900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.50s)

TestStartStop/group/old-k8s-version/serial/Pause (13.75s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p old-k8s-version-873100 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p old-k8s-version-873100 --alsologtostderr -v=1: (2.7448645s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-873100 -n old-k8s-version-873100
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-873100 -n old-k8s-version-873100: exit status 2 (1.7568867s)

-- stdout --
	Paused

-- /stdout --
** stderr ** 
	W0513 23:55:17.606651    3624 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-873100 -n old-k8s-version-873100
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-873100 -n old-k8s-version-873100: exit status 2 (1.556041s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0513 23:55:19.337872   10940 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p old-k8s-version-873100 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p old-k8s-version-873100 --alsologtostderr -v=1: (2.6428355s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-873100 -n old-k8s-version-873100
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-873100 -n old-k8s-version-873100: (2.8313777s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-873100 -n old-k8s-version-873100
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-873100 -n old-k8s-version-873100: (2.2168681s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (13.75s)
E0514 00:00:45.748466   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-561500\client.crt: The system cannot find the path specified.
E0514 00:00:56.682504   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-062300\client.crt: The system cannot find the path specified.
E0514 00:00:57.076342   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-589900\client.crt: The system cannot find the path specified.
E0514 00:01:13.553017   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-561500\client.crt: The system cannot find the path specified.
E0514 00:01:21.792758   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-589900\client.crt: The system cannot find the path specified.
E0514 00:01:24.805532   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-062300\client.crt: The system cannot find the path specified.

TestNetworkPlugins/group/kindnet/HairPin (0.54s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-589900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.54s)

TestNetworkPlugins/group/custom-flannel/Start (151.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-flannel-589900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-flannel-589900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=docker: (2m31.1976689s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (151.20s)

TestNetworkPlugins/group/false/Start (139.7s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p false-589900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker
E0513 23:55:45.726426   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-561500\client.crt: The system cannot find the path specified.
E0513 23:55:45.741011   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-561500\client.crt: The system cannot find the path specified.
E0513 23:55:45.756404   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-561500\client.crt: The system cannot find the path specified.
E0513 23:55:45.788682   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-561500\client.crt: The system cannot find the path specified.
E0513 23:55:45.834685   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-561500\client.crt: The system cannot find the path specified.
E0513 23:55:45.928384   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-561500\client.crt: The system cannot find the path specified.
E0513 23:55:46.100056   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-561500\client.crt: The system cannot find the path specified.
E0513 23:55:46.426129   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-561500\client.crt: The system cannot find the path specified.
E0513 23:55:47.079075   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-561500\client.crt: The system cannot find the path specified.
E0513 23:55:48.362812   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-561500\client.crt: The system cannot find the path specified.
E0513 23:55:50.935541   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-561500\client.crt: The system cannot find the path specified.
E0513 23:55:56.058645   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-561500\client.crt: The system cannot find the path specified.
E0513 23:55:56.669399   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-062300\client.crt: The system cannot find the path specified.
E0513 23:55:56.674690   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-062300\client.crt: The system cannot find the path specified.
E0513 23:55:56.690277   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-062300\client.crt: The system cannot find the path specified.
E0513 23:55:56.721376   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-062300\client.crt: The system cannot find the path specified.
E0513 23:55:56.767804   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-062300\client.crt: The system cannot find the path specified.
E0513 23:55:56.848357   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-062300\client.crt: The system cannot find the path specified.
E0513 23:55:57.019825   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-062300\client.crt: The system cannot find the path specified.
E0513 23:55:57.341172   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-062300\client.crt: The system cannot find the path specified.
E0513 23:55:57.984500   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-062300\client.crt: The system cannot find the path specified.
E0513 23:55:59.265062   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-062300\client.crt: The system cannot find the path specified.
E0513 23:56:02.156573   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-062300\client.crt: The system cannot find the path specified.
E0513 23:56:06.309448   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-561500\client.crt: The system cannot find the path specified.
E0513 23:56:07.287673   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-062300\client.crt: The system cannot find the path specified.
E0513 23:56:17.539059   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-062300\client.crt: The system cannot find the path specified.
E0513 23:56:26.792777   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-561500\client.crt: The system cannot find the path specified.
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p false-589900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker: (2m19.6970368s)
--- PASS: TestNetworkPlugins/group/false/Start (139.70s)

TestNetworkPlugins/group/enable-default-cni/Start (115.84s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p enable-default-cni-589900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker
E0513 23:57:07.767613   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-561500\client.crt: The system cannot find the path specified.
E0513 23:57:19.007591   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-062300\client.crt: The system cannot find the path specified.
E0513 23:57:23.324476   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-873100\client.crt: The system cannot find the path specified.
E0513 23:57:23.341144   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-873100\client.crt: The system cannot find the path specified.
E0513 23:57:23.356725   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-873100\client.crt: The system cannot find the path specified.
E0513 23:57:23.387605   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-873100\client.crt: The system cannot find the path specified.
E0513 23:57:23.433271   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-873100\client.crt: The system cannot find the path specified.
E0513 23:57:23.528602   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-873100\client.crt: The system cannot find the path specified.
E0513 23:57:23.694268   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-873100\client.crt: The system cannot find the path specified.
E0513 23:57:24.027091   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-873100\client.crt: The system cannot find the path specified.
E0513 23:57:24.682092   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-873100\client.crt: The system cannot find the path specified.
E0513 23:57:25.975726   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-873100\client.crt: The system cannot find the path specified.
E0513 23:57:28.540738   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-873100\client.crt: The system cannot find the path specified.
E0513 23:57:33.662915   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-873100\client.crt: The system cannot find the path specified.
E0513 23:57:43.912456   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-873100\client.crt: The system cannot find the path specified.
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p enable-default-cni-589900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker: (1m55.8352708s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (115.84s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (1.58s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p custom-flannel-589900 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p custom-flannel-589900 "pgrep -a kubelet": (1.5807295s)
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (1.58s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (20.88s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-589900 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-6f4gm" [86738504-769e-4ecd-960d-a2f0c00326e3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-6f4gm" [86738504-769e-4ecd-960d-a2f0c00326e3] Running
E0513 23:58:04.407247   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-873100\client.crt: The system cannot find the path specified.
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 20.0208255s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (20.88s)

TestNetworkPlugins/group/false/KubeletFlags (1.47s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p false-589900 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p false-589900 "pgrep -a kubelet": (1.4658713s)
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (1.47s)

TestNetworkPlugins/group/calico/ControllerPod (6.04s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-l55d8" [50a20bf9-752d-4cfc-b087-05348e41b00a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.0350777s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.04s)

TestNetworkPlugins/group/false/NetCatPod (22.74s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-589900 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-2p6z9" [f14ef362-5232-43c5-b50c-4be3416161bf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-2p6z9" [f14ef362-5232-43c5-b50c-4be3416161bf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 22.0136419s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (22.74s)

TestNetworkPlugins/group/custom-flannel/DNS (0.64s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-589900 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.64s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.58s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-589900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.58s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.57s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-589900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.57s)

TestNetworkPlugins/group/calico/KubeletFlags (1.85s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p calico-589900 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p calico-589900 "pgrep -a kubelet": (1.8515689s)
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (1.85s)

TestNetworkPlugins/group/calico/NetCatPod (25.8s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-589900 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-494vs" [8fe1ca87-78ae-4164-983f-1f645fbc37bb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-494vs" [8fe1ca87-78ae-4164-983f-1f645fbc37bb] Running
E0513 23:58:33.312837   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-557700\client.crt: The system cannot find the path specified.
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 25.0181397s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (25.80s)

TestNetworkPlugins/group/false/DNS (0.54s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-589900 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.54s)

TestNetworkPlugins/group/false/Localhost (0.45s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-589900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.45s)

TestNetworkPlugins/group/false/HairPin (0.52s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-589900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.52s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (1.64s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p enable-default-cni-589900 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p enable-default-cni-589900 "pgrep -a kubelet": (1.6375521s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (1.64s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (26.04s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-589900 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-rbpxm" [f584b197-39d6-4694-af18-b9b37fa3c16c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0513 23:58:37.013271   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-950600\client.crt: The system cannot find the path specified.
E0513 23:58:37.809197   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-589900\client.crt: The system cannot find the path specified.
E0513 23:58:37.824362   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-589900\client.crt: The system cannot find the path specified.
E0513 23:58:37.839842   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-589900\client.crt: The system cannot find the path specified.
E0513 23:58:37.870781   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-589900\client.crt: The system cannot find the path specified.
E0513 23:58:37.916434   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-589900\client.crt: The system cannot find the path specified.
E0513 23:58:38.009306   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-589900\client.crt: The system cannot find the path specified.
E0513 23:58:38.179803   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-589900\client.crt: The system cannot find the path specified.
E0513 23:58:38.515423   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-589900\client.crt: The system cannot find the path specified.
helpers_test.go:344: "netcat-6bc787d567-rbpxm" [f584b197-39d6-4694-af18-b9b37fa3c16c] Running
E0513 23:58:58.390332   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-589900\client.crt: The system cannot find the path specified.
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 25.0247456s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (26.04s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.67s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-589900 exec deployment/netcat -- nslookup kubernetes.default
E0513 23:58:39.160917   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-589900\client.crt: The system cannot find the path specified.
--- PASS: TestNetworkPlugins/group/calico/DNS (0.67s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.72s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-589900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.72s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.74s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-589900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0513 23:58:40.451216   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-589900\client.crt: The system cannot find the path specified.
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.74s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.65s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-589900 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.65s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.63s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-589900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.63s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.61s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-589900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.61s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (142.95s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p flannel-589900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker
E0513 23:59:45.371277   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-589900\client.crt: The system cannot find the path specified.
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p flannel-589900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker: (2m22.9489371s)
--- PASS: TestNetworkPlugins/group/flannel/Start (142.95s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (101.82s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p bridge-589900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker
E0513 23:59:59.854020   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-589900\client.crt: The system cannot find the path specified.
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p bridge-589900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker: (1m41.8161537s)
--- PASS: TestNetworkPlugins/group/bridge/Start (101.82s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (128.31s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubenet-589900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker
E0514 00:00:16.113718   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-589900\client.crt: The system cannot find the path specified.
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p kubenet-589900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker: (2m8.3103793s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (128.31s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (2.03s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p bridge-589900 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p bridge-589900 "pgrep -a kubelet": (2.0256368s)
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (2.03s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (19.82s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-589900 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-pq7ct" [ef06435b-ec63-40c4-a479-b26eb115beed] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-pq7ct" [ef06435b-ec63-40c4-a479-b26eb115beed] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 19.016403s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (19.82s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.35s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-589900 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.35s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.38s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-589900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.38s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.34s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-589900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.34s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.03s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-gkmmk" [cc21084e-607f-400c-8340-2cd1f6558a85] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.0229525s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.03s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (1.34s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p flannel-589900 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p flannel-589900 "pgrep -a kubelet": (1.3373795s)
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (1.34s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (23.76s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-589900 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-2vkx7" [2202ce96-f277-4e98-8d7b-a8577584bdf8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0514 00:02:19.013304   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-589900\client.crt: The system cannot find the path specified.
E0514 00:02:23.335523   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-873100\client.crt: The system cannot find the path specified.
helpers_test.go:344: "netcat-6bc787d567-2vkx7" [2202ce96-f277-4e98-8d7b-a8577584bdf8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 23.025095s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (23.76s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (1.57s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kubenet-589900 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p kubenet-589900 "pgrep -a kubelet": (1.5599338s)
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (1.57s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (22.8s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-589900 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-9xwnx" [ce0f1496-dcad-44ed-9f44-d3b53b2c1a06] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-9xwnx" [ce0f1496-dcad-44ed-9f44-d3b53b2c1a06] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 22.0270691s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (22.80s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.47s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-589900 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.47s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.47s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-589900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.47s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.43s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-589900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.43s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.77s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
E0514 00:02:48.707605   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-589900\client.crt: The system cannot find the path specified.
net_test.go:175: (dbg) Run:  kubectl --context kubenet-589900 exec deployment/netcat -- nslookup kubernetes.default
E0514 00:02:48.722954   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-589900\client.crt: The system cannot find the path specified.
E0514 00:02:48.738393   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-589900\client.crt: The system cannot find the path specified.
E0514 00:02:48.768657   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-589900\client.crt: The system cannot find the path specified.
E0514 00:02:48.816142   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-589900\client.crt: The system cannot find the path specified.
E0514 00:02:48.908913   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-589900\client.crt: The system cannot find the path specified.
E0514 00:02:49.079965   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-589900\client.crt: The system cannot find the path specified.
E0514 00:02:49.417397   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-589900\client.crt: The system cannot find the path specified.
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.77s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.41s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-589900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.41s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.51s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-589900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0514 00:02:50.067384   15868 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-589900\client.crt: The system cannot find the path specified.
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.51s)

                                                
                                    

Test skip (25/339)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0/binaries (0.00s)

                                                
                                    
TestAddons/parallel/Registry (19.48s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 82.7456ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-qlc52" [0e134caf-b84e-4b5d-9382-82ae529c47fa] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.0180063s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-5dp4z" [525cb14e-e008-4696-a19f-c15548901a09] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0192311s
addons_test.go:340: (dbg) Run:  kubectl --context addons-557700 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-557700 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-557700 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (8.1632294s)
addons_test.go:355: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (19.48s)

                                                
                                    
TestAddons/parallel/Ingress (25.47s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-557700 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:207: (dbg) Done: kubectl --context addons-557700 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (6.068188s)
addons_test.go:232: (dbg) Run:  kubectl --context addons-557700 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-557700 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [3b5b6798-b14c-4266-b99c-500f426a21dc] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [3b5b6798-b14c-4266-b99c-500f426a21dc] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 17.0117285s
addons_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-557700 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe -p addons-557700 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (1.2238314s)
addons_test.go:269: debug: unexpected stderr for out/minikube-windows-amd64.exe -p addons-557700 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'":
W0513 22:31:01.450258    5668 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
addons_test.go:282: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (25.47s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (300.01s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-950600 --alsologtostderr -v=1]
functional_test.go:912: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-950600 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 7032: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.01s)

                                                
                                    
TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
                                                
=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:64: skipping: mount broken on windows: https://github.com/kubernetes/minikube/issues/8303
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (8.93s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-950600 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-950600 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-2fltb" [e13d5ffa-04b7-4e0e-947e-b0de1ccf5b00] Pending
helpers_test.go:344: "hello-node-connect-57b4589c47-2fltb" [e13d5ffa-04b7-4e0e-947e-b0de1ccf5b00] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-2fltb" [e13d5ffa-04b7-4e0e-947e-b0de1ccf5b00] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.0516658s
functional_test.go:1642: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (8.93s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (1.36s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-996400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p disable-driver-mounts-996400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p disable-driver-mounts-996400: (1.3631767s)
--- SKIP: TestStartStop/group/disable-driver-mounts (1.36s)

TestNetworkPlugins/group/cilium (13.84s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-589900 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-589900

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-589900

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-589900

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-589900

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-589900

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-589900

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-589900

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-589900

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-589900

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-589900

>>> host: /etc/nsswitch.conf:
W0513 23:41:22.522228    4000 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-589900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589900"

>>> host: /etc/hosts:
W0513 23:41:22.811831    4960 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-589900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589900"

>>> host: /etc/resolv.conf:
W0513 23:41:23.101541    8004 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-589900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589900"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-589900

>>> host: crictl pods:
W0513 23:41:23.515756    9256 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-589900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589900"

>>> host: crictl containers:
W0513 23:41:23.777563    8664 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-589900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589900"

>>> k8s: describe netcat deployment:
error: context "cilium-589900" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-589900" does not exist

>>> k8s: netcat logs:
error: context "cilium-589900" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-589900" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-589900" does not exist

>>> k8s: coredns logs:
error: context "cilium-589900" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-589900" does not exist

>>> k8s: api server logs:
error: context "cilium-589900" does not exist

>>> host: /etc/cni:
W0513 23:41:25.098261    4660 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-589900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589900"

>>> host: ip a s:
W0513 23:41:25.318740    5792 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-589900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589900"

>>> host: ip r s:
W0513 23:41:25.555386   10264 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-589900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589900"

>>> host: iptables-save:
W0513 23:41:25.786442   12084 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-589900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589900"

>>> host: iptables table nat:
W0513 23:41:26.036430    8440 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-589900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589900"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-589900

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-589900

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-589900" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-589900" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-589900

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-589900

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-589900" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-589900" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-589900" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-589900" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-589900" does not exist

>>> host: kubelet daemon status:
W0513 23:41:27.798940   16192 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-589900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589900"

>>> host: kubelet daemon config:
W0513 23:41:28.061695   12140 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-589900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589900"

>>> k8s: kubelet logs:
W0513 23:41:28.322502    9116 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-589900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589900"

>>> host: /etc/kubernetes/kubelet.conf:
W0513 23:41:28.539837    5768 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-589900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589900"

>>> host: /var/lib/kubelet/config.yaml:
W0513 23:41:28.775414   15880 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-589900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589900"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-589900

>>> host: docker daemon status:
W0513 23:41:29.263569    6760 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-589900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589900"

>>> host: docker daemon config:
W0513 23:41:29.508215   11176 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-589900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589900"

>>> host: /etc/docker/daemon.json:
W0513 23:41:29.731821   11760 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-589900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589900"

>>> host: docker system info:
W0513 23:41:29.968872   10400 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-589900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589900"

>>> host: cri-docker daemon status:
W0513 23:41:30.192643    3236 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-589900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589900"

>>> host: cri-docker daemon config:
W0513 23:41:30.431632    3336 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-589900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589900"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
W0513 23:41:30.674067   13000 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-589900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589900"

>>> host: /usr/lib/systemd/system/cri-docker.service:
W0513 23:41:30.930849   11516 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-589900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589900"

>>> host: cri-dockerd version:
W0513 23:41:31.179638    2296 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-589900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589900"

>>> host: containerd daemon status:
W0513 23:41:31.453842   12312 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-589900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589900"

>>> host: containerd daemon config:
W0513 23:41:31.721735   10392 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-589900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589900"

>>> host: /lib/systemd/system/containerd.service:
W0513 23:41:31.960565   11440 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-589900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589900"

>>> host: /etc/containerd/config.toml:
W0513 23:41:32.216897   16132 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-589900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589900"

>>> host: containerd config dump:
W0513 23:41:32.520005   12048 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-589900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589900"

>>> host: crio daemon status:
W0513 23:41:32.774424   11520 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-589900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589900"

>>> host: crio daemon config:
W0513 23:41:33.060652   16220 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-589900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589900"

>>> host: /etc/crio:
W0513 23:41:33.303443   12180 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-589900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589900"

>>> host: crio config:
W0513 23:41:33.558775   11220 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube4\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-589900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589900"

----------------------- debugLogs end: cilium-589900 [took: 12.5555298s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-589900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cilium-589900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cilium-589900: (1.2872282s)
--- SKIP: TestNetworkPlugins/group/cilium (13.84s)