Test Report: Docker_Windows 19875

9b6a7d882f95daeab36015d5b0633b1bcea3cc50 : 2024-10-28 : 36842

Failed tests (3/342)

Order | Failed test                                            | Duration (s)
58    | TestErrorSpam/setup                                    | 62.23
82    | TestFunctional/serial/MinikubeKubectlCmdDirectly       | 5.51
372   | TestStartStop/group/old-k8s-version/serial/SecondStart | 409.75
TestErrorSpam/setup (62.23s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-883200 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-883200 --driver=docker
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-883200 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-883200 --driver=docker: (1m2.2244771s)
error_spam_test.go:96: unexpected stderr: "! Failing to connect to https://registry.k8s.io/ from inside the minikube container"
error_spam_test.go:96: unexpected stderr: "* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/"
error_spam_test.go:110: minikube stdout:
* [nospam-883200] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5073 Build 19045.5073
- KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
- MINIKUBE_LOCATION=19875
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the docker driver based on user configuration
* Using Docker Desktop driver with root privileges
* Starting "nospam-883200" primary control-plane node in "nospam-883200" cluster
* Pulling base image v0.0.45-1729876044-19868 ...
* Creating docker container (CPUs=2, Memory=2250MB) ...
* Preparing Kubernetes v1.31.2 on Docker 27.3.1 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-883200" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
! Failing to connect to https://registry.k8s.io/ from inside the minikube container
* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
--- FAIL: TestErrorSpam/setup (62.23s)
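The stderr lines above are the cause of this failure: the error-spam check treats any unexpected stderr as a failure, and the container could not reach registry.k8s.io. A minimal sketch of the workaround the message points to (setting proxy environment variables before `minikube start`, per the linked proxy guide; `proxy.example.com:3128` is a hypothetical endpoint, substitute your site's proxy):

```shell
# Hypothetical proxy endpoint -- replace with the real one for this network.
export HTTP_PROXY=http://proxy.example.com:3128
export HTTPS_PROXY=$HTTP_PROXY
# Exempt local addresses and the minikube subnet so the client can still
# reach the node directly.
export NO_PROXY=localhost,127.0.0.1,192.168.49.0/24
minikube start --driver=docker
```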

TestFunctional/serial/MinikubeKubectlCmdDirectly (5.51s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:735: link out/minikube-windows-amd64.exe out\kubectl.exe: Cannot create a file when that file already exists.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-928900
helpers_test.go:235: (dbg) docker inspect functional-928900:

-- stdout --
	[
	    {
	        "Id": "60c030a11c8add26114fcf1baf965157b02c7bc681b9679c69c19fdc4ee4a783",
	        "Created": "2024-10-28T11:14:05.091537938Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 27384,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-10-28T11:14:05.395818167Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:05bcd996665116a573f1bc98d7e2b0a5da287feef26d621bbd294f87ee72c630",
	        "ResolvConfPath": "/var/lib/docker/containers/60c030a11c8add26114fcf1baf965157b02c7bc681b9679c69c19fdc4ee4a783/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/60c030a11c8add26114fcf1baf965157b02c7bc681b9679c69c19fdc4ee4a783/hostname",
	        "HostsPath": "/var/lib/docker/containers/60c030a11c8add26114fcf1baf965157b02c7bc681b9679c69c19fdc4ee4a783/hosts",
	        "LogPath": "/var/lib/docker/containers/60c030a11c8add26114fcf1baf965157b02c7bc681b9679c69c19fdc4ee4a783/60c030a11c8add26114fcf1baf965157b02c7bc681b9679c69c19fdc4ee4a783-json.log",
	        "Name": "/functional-928900",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-928900:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-928900",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4194304000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3d0431647b8a9bde173989a61cce0359f2856bbb404a326d7bb142ccce6a728a-init/diff:/var/lib/docker/overlay2/56549ac06c27a2316e9ca3114510d52d2c5e1a27f1ba14da0e1cd8dee84d22ba/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3d0431647b8a9bde173989a61cce0359f2856bbb404a326d7bb142ccce6a728a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3d0431647b8a9bde173989a61cce0359f2856bbb404a326d7bb142ccce6a728a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3d0431647b8a9bde173989a61cce0359f2856bbb404a326d7bb142ccce6a728a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-928900",
	                "Source": "/var/lib/docker/volumes/functional-928900/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-928900",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-928900",
	                "name.minikube.sigs.k8s.io": "functional-928900",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7fe4fc4797d0d41acb7fa73d5fc347b9095dec6f803b34ac9a53afe5cdc166c1",
	            "SandboxKey": "/var/run/docker/netns/7fe4fc4797d0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59547"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59548"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59549"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59550"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59551"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-928900": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "75c7fd43bfc3d4e216c52332ee61506bb5ee9bef1b8ba5b92c86eb42568a6a90",
	                    "EndpointID": "64098f72fa044c05c8d064adee80cf5c9c9a48b6692f8e536eebb595b854031c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-928900",
	                        "60c030a11c8a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-928900 -n functional-928900
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-928900 logs -n 25: (2.5742062s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                            Args                             |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| pause   | nospam-883200 --log_dir                                     | nospam-883200     | minikube4\jenkins | v1.34.0 | 28 Oct 24 11:13 UTC | 28 Oct 24 11:13 UTC |
	|         | C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-883200 |                   |                   |         |                     |                     |
	|         | pause                                                       |                   |                   |         |                     |                     |
	| unpause | nospam-883200 --log_dir                                     | nospam-883200     | minikube4\jenkins | v1.34.0 | 28 Oct 24 11:13 UTC | 28 Oct 24 11:13 UTC |
	|         | C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-883200 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-883200 --log_dir                                     | nospam-883200     | minikube4\jenkins | v1.34.0 | 28 Oct 24 11:13 UTC | 28 Oct 24 11:13 UTC |
	|         | C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-883200 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-883200 --log_dir                                     | nospam-883200     | minikube4\jenkins | v1.34.0 | 28 Oct 24 11:13 UTC | 28 Oct 24 11:13 UTC |
	|         | C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-883200 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-883200 --log_dir                                     | nospam-883200     | minikube4\jenkins | v1.34.0 | 28 Oct 24 11:13 UTC | 28 Oct 24 11:13 UTC |
	|         | C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-883200 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-883200 --log_dir                                     | nospam-883200     | minikube4\jenkins | v1.34.0 | 28 Oct 24 11:13 UTC | 28 Oct 24 11:13 UTC |
	|         | C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-883200 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-883200 --log_dir                                     | nospam-883200     | minikube4\jenkins | v1.34.0 | 28 Oct 24 11:13 UTC | 28 Oct 24 11:13 UTC |
	|         | C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-883200 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| delete  | -p nospam-883200                                            | nospam-883200     | minikube4\jenkins | v1.34.0 | 28 Oct 24 11:13 UTC | 28 Oct 24 11:13 UTC |
	| start   | -p functional-928900                                        | functional-928900 | minikube4\jenkins | v1.34.0 | 28 Oct 24 11:13 UTC | 28 Oct 24 11:15 UTC |
	|         | --memory=4000                                               |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                       |                   |                   |         |                     |                     |
	|         | --wait=all --driver=docker                                  |                   |                   |         |                     |                     |
	| start   | -p functional-928900                                        | functional-928900 | minikube4\jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | --alsologtostderr -v=8                                      |                   |                   |         |                     |                     |
	| cache   | functional-928900 cache add                                 | functional-928900 | minikube4\jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | functional-928900 cache add                                 | functional-928900 | minikube4\jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:16 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | functional-928900 cache add                                 | functional-928900 | minikube4\jenkins | v1.34.0 | 28 Oct 24 11:16 UTC | 28 Oct 24 11:16 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-928900 cache add                                 | functional-928900 | minikube4\jenkins | v1.34.0 | 28 Oct 24 11:16 UTC | 28 Oct 24 11:16 UTC |
	|         | minikube-local-cache-test:functional-928900                 |                   |                   |         |                     |                     |
	| cache   | functional-928900 cache delete                              | functional-928900 | minikube4\jenkins | v1.34.0 | 28 Oct 24 11:16 UTC | 28 Oct 24 11:16 UTC |
	|         | minikube-local-cache-test:functional-928900                 |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube4\jenkins | v1.34.0 | 28 Oct 24 11:16 UTC | 28 Oct 24 11:16 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | list                                                        | minikube          | minikube4\jenkins | v1.34.0 | 28 Oct 24 11:16 UTC | 28 Oct 24 11:16 UTC |
	| ssh     | functional-928900 ssh sudo                                  | functional-928900 | minikube4\jenkins | v1.34.0 | 28 Oct 24 11:16 UTC | 28 Oct 24 11:16 UTC |
	|         | crictl images                                               |                   |                   |         |                     |                     |
	| ssh     | functional-928900                                           | functional-928900 | minikube4\jenkins | v1.34.0 | 28 Oct 24 11:16 UTC | 28 Oct 24 11:16 UTC |
	|         | ssh sudo docker rmi                                         |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| ssh     | functional-928900 ssh                                       | functional-928900 | minikube4\jenkins | v1.34.0 | 28 Oct 24 11:16 UTC |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-928900 cache reload                              | functional-928900 | minikube4\jenkins | v1.34.0 | 28 Oct 24 11:16 UTC | 28 Oct 24 11:16 UTC |
	| ssh     | functional-928900 ssh                                       | functional-928900 | minikube4\jenkins | v1.34.0 | 28 Oct 24 11:16 UTC | 28 Oct 24 11:16 UTC |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube4\jenkins | v1.34.0 | 28 Oct 24 11:16 UTC | 28 Oct 24 11:16 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube4\jenkins | v1.34.0 | 28 Oct 24 11:16 UTC | 28 Oct 24 11:16 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| kubectl | functional-928900 kubectl --                                | functional-928900 | minikube4\jenkins | v1.34.0 | 28 Oct 24 11:16 UTC | 28 Oct 24 11:16 UTC |
	|         | --context functional-928900                                 |                   |                   |         |                     |                     |
	|         | get pods                                                    |                   |                   |         |                     |                     |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 11:15:12
	Running on machine: minikube4
	Binary: Built with gc go1.23.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 11:15:12.319260    4264 out.go:345] Setting OutFile to fd 1124 ...
	I1028 11:15:12.394376    4264 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:15:12.394376    4264 out.go:358] Setting ErrFile to fd 1132...
	I1028 11:15:12.394376    4264 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:15:12.418892    4264 out.go:352] Setting JSON to false
	I1028 11:15:12.421862    4264 start.go:129] hostinfo: {"hostname":"minikube4","uptime":1209,"bootTime":1730112903,"procs":205,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5073 Build 19045.5073","kernelVersion":"10.0.19045.5073 Build 19045.5073","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1028 11:15:12.422014    4264 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 11:15:12.425769    4264 out.go:177] * [functional-928900] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5073 Build 19045.5073
	I1028 11:15:12.429931    4264 notify.go:220] Checking for updates...
	I1028 11:15:12.430133    4264 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1028 11:15:12.432845    4264 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 11:15:12.435464    4264 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1028 11:15:12.438034    4264 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 11:15:12.440343    4264 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 11:15:12.443544    4264 config.go:182] Loaded profile config "functional-928900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 11:15:12.444460    4264 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 11:15:12.618488    4264 docker.go:123] docker version: linux-27.2.0:Docker Desktop 4.34.2 (167172)
	I1028 11:15:12.628291    4264 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1028 11:15:12.933858    4264 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:true NGoroutines:79 SystemTime:2024-10-28 11:15:12.905440146 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657532416 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I1028 11:15:12.939375    4264 out.go:177] * Using the docker driver based on existing profile
	I1028 11:15:12.942005    4264 start.go:297] selected driver: docker
	I1028 11:15:12.942005    4264 start.go:901] validating driver "docker" against &{Name:functional-928900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-928900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:15:12.942633    4264 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 11:15:12.959277    4264 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1028 11:15:13.280972    4264 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:true NGoroutines:79 SystemTime:2024-10-28 11:15:13.250921362 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657532416 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I1028 11:15:13.388434    4264 cni.go:84] Creating CNI manager for ""
	I1028 11:15:13.388434    4264 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 11:15:13.388434    4264 start.go:340] cluster config:
	{Name:functional-928900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-928900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:15:13.392278    4264 out.go:177] * Starting "functional-928900" primary control-plane node in "functional-928900" cluster
	I1028 11:15:13.396538    4264 cache.go:121] Beginning downloading kic base image for docker with docker
	I1028 11:15:13.399928    4264 out.go:177] * Pulling base image v0.0.45-1729876044-19868 ...
	I1028 11:15:13.404315    4264 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 11:15:13.404389    4264 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e in local docker daemon
	I1028 11:15:13.404506    4264 preload.go:146] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4
	I1028 11:15:13.404506    4264 cache.go:56] Caching tarball of preloaded images
	I1028 11:15:13.404506    4264 preload.go:172] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1028 11:15:13.405143    4264 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 11:15:13.405264    4264 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-928900\config.json ...
	I1028 11:15:13.510237    4264 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e in local docker daemon, skipping pull
	I1028 11:15:13.510237    4264 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e exists in daemon, skipping load
	I1028 11:15:13.510237    4264 cache.go:194] Successfully downloaded all kic artifacts
	I1028 11:15:13.511247    4264 start.go:360] acquireMachinesLock for functional-928900: {Name:mkfb513955e8bba12e7c25a87ed0d31f5d5952ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 11:15:13.511247    4264 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-928900"
	I1028 11:15:13.511247    4264 start.go:96] Skipping create...Using existing machine configuration
	I1028 11:15:13.511247    4264 fix.go:54] fixHost starting: 
	I1028 11:15:13.527995    4264 cli_runner.go:164] Run: docker container inspect functional-928900 --format={{.State.Status}}
	I1028 11:15:13.596912    4264 fix.go:112] recreateIfNeeded on functional-928900: state=Running err=<nil>
	W1028 11:15:13.596912    4264 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 11:15:13.602906    4264 out.go:177] * Updating the running docker "functional-928900" container ...
	I1028 11:15:13.604901    4264 machine.go:93] provisionDockerMachine start ...
	I1028 11:15:13.612906    4264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-928900
	I1028 11:15:13.683905    4264 main.go:141] libmachine: Using SSH client type: native
	I1028 11:15:13.684919    4264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x743340] 0x745e80 <nil>  [] 0s} 127.0.0.1 59547 <nil> <nil>}
	I1028 11:15:13.684919    4264 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 11:15:13.856203    4264 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-928900
	
	I1028 11:15:13.856243    4264 ubuntu.go:169] provisioning hostname "functional-928900"
	I1028 11:15:13.865936    4264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-928900
	I1028 11:15:13.935550    4264 main.go:141] libmachine: Using SSH client type: native
	I1028 11:15:13.936549    4264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x743340] 0x745e80 <nil>  [] 0s} 127.0.0.1 59547 <nil> <nil>}
	I1028 11:15:13.936549    4264 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-928900 && echo "functional-928900" | sudo tee /etc/hostname
	I1028 11:15:14.128258    4264 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-928900
	
	I1028 11:15:14.138229    4264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-928900
	I1028 11:15:14.206778    4264 main.go:141] libmachine: Using SSH client type: native
	I1028 11:15:14.206778    4264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x743340] 0x745e80 <nil>  [] 0s} 127.0.0.1 59547 <nil> <nil>}
	I1028 11:15:14.206778    4264 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-928900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-928900/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-928900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 11:15:14.395379    4264 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 11:15:14.395379    4264 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1028 11:15:14.395379    4264 ubuntu.go:177] setting up certificates
	I1028 11:15:14.395379    4264 provision.go:84] configureAuth start
	I1028 11:15:14.405208    4264 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-928900
	I1028 11:15:14.472565    4264 provision.go:143] copyHostCerts
	I1028 11:15:14.472565    4264 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem
	I1028 11:15:14.472565    4264 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1028 11:15:14.472565    4264 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1028 11:15:14.472565    4264 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1028 11:15:14.473559    4264 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem
	I1028 11:15:14.474575    4264 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1028 11:15:14.474575    4264 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1028 11:15:14.474575    4264 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1028 11:15:14.477811    4264 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem
	I1028 11:15:14.478488    4264 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1028 11:15:14.478584    4264 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1028 11:15:14.479000    4264 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1028 11:15:14.479746    4264 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-928900 san=[127.0.0.1 192.168.49.2 functional-928900 localhost minikube]
	I1028 11:15:14.594169    4264 provision.go:177] copyRemoteCerts
	I1028 11:15:14.600928    4264 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 11:15:14.612851    4264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-928900
	I1028 11:15:14.692229    4264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59547 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-928900\id_rsa Username:docker}
	I1028 11:15:14.833950    4264 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1028 11:15:14.834914    4264 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 11:15:14.878374    4264 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1028 11:15:14.878492    4264 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 11:15:14.929432    4264 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1028 11:15:14.930419    4264 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1028 11:15:14.974813    4264 provision.go:87] duration metric: took 579.426ms to configureAuth
	I1028 11:15:14.974883    4264 ubuntu.go:193] setting minikube options for container-runtime
	I1028 11:15:14.975421    4264 config.go:182] Loaded profile config "functional-928900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 11:15:14.983919    4264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-928900
	I1028 11:15:15.058644    4264 main.go:141] libmachine: Using SSH client type: native
	I1028 11:15:15.059669    4264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x743340] 0x745e80 <nil>  [] 0s} 127.0.0.1 59547 <nil> <nil>}
	I1028 11:15:15.059669    4264 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1028 11:15:15.245731    4264 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1028 11:15:15.245731    4264 ubuntu.go:71] root file system type: overlay
	I1028 11:15:15.246392    4264 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1028 11:15:15.255818    4264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-928900
	I1028 11:15:15.341456    4264 main.go:141] libmachine: Using SSH client type: native
	I1028 11:15:15.342482    4264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x743340] 0x745e80 <nil>  [] 0s} 127.0.0.1 59547 <nil> <nil>}
	I1028 11:15:15.342482    4264 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1028 11:15:15.542963    4264 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1028 11:15:15.554776    4264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-928900
	I1028 11:15:15.632843    4264 main.go:141] libmachine: Using SSH client type: native
	I1028 11:15:15.632843    4264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x743340] 0x745e80 <nil>  [] 0s} 127.0.0.1 59547 <nil> <nil>}
	I1028 11:15:15.632843    4264 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1028 11:15:15.814549    4264 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 11:15:15.814549    4264 machine.go:96] duration metric: took 2.2096173s to provisionDockerMachine
	I1028 11:15:15.814549    4264 start.go:293] postStartSetup for "functional-928900" (driver="docker")
	I1028 11:15:15.814549    4264 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 11:15:15.827846    4264 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 11:15:15.836767    4264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-928900
	I1028 11:15:15.912224    4264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59547 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-928900\id_rsa Username:docker}
	I1028 11:15:16.064387    4264 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 11:15:16.075302    4264 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.5 LTS"
	I1028 11:15:16.075302    4264 command_runner.go:130] > NAME="Ubuntu"
	I1028 11:15:16.075302    4264 command_runner.go:130] > VERSION_ID="22.04"
	I1028 11:15:16.075302    4264 command_runner.go:130] > VERSION="22.04.5 LTS (Jammy Jellyfish)"
	I1028 11:15:16.075302    4264 command_runner.go:130] > VERSION_CODENAME=jammy
	I1028 11:15:16.075302    4264 command_runner.go:130] > ID=ubuntu
	I1028 11:15:16.075302    4264 command_runner.go:130] > ID_LIKE=debian
	I1028 11:15:16.075302    4264 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1028 11:15:16.075302    4264 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1028 11:15:16.075302    4264 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1028 11:15:16.075302    4264 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1028 11:15:16.075302    4264 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1028 11:15:16.075302    4264 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1028 11:15:16.075302    4264 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1028 11:15:16.075834    4264 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1028 11:15:16.076653    4264 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1028 11:15:16.076653    4264 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1028 11:15:16.076653    4264 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1028 11:15:16.077861    4264 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\111762.pem -> 111762.pem in /etc/ssl/certs
	I1028 11:15:16.077861    4264 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\111762.pem -> /etc/ssl/certs/111762.pem
	I1028 11:15:16.079162    4264 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11176\hosts -> hosts in /etc/test/nested/copy/11176
	I1028 11:15:16.079162    4264 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11176\hosts -> /etc/test/nested/copy/11176/hosts
	I1028 11:15:16.094820    4264 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/11176
	I1028 11:15:16.116342    4264 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\111762.pem --> /etc/ssl/certs/111762.pem (1708 bytes)
	I1028 11:15:16.162146    4264 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11176\hosts --> /etc/test/nested/copy/11176/hosts (40 bytes)
	I1028 11:15:16.202277    4264 start.go:296] duration metric: took 387.7236ms for postStartSetup
	I1028 11:15:16.216930    4264 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1028 11:15:16.229217    4264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-928900
	I1028 11:15:16.302594    4264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59547 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-928900\id_rsa Username:docker}
	I1028 11:15:16.418874    4264 command_runner.go:130] > 1%
	I1028 11:15:16.432093    4264 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1028 11:15:16.444888    4264 command_runner.go:130] > 951G
	I1028 11:15:16.445589    4264 fix.go:56] duration metric: took 2.9343017s for fixHost
	I1028 11:15:16.445589    4264 start.go:83] releasing machines lock for "functional-928900", held for 2.9343017s
	I1028 11:15:16.454458    4264 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-928900
	I1028 11:15:16.528317    4264 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1028 11:15:16.539752    4264 ssh_runner.go:195] Run: cat /version.json
	I1028 11:15:16.542317    4264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-928900
	I1028 11:15:16.550367    4264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-928900
	I1028 11:15:16.609750    4264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59547 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-928900\id_rsa Username:docker}
	I1028 11:15:16.609750    4264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59547 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-928900\id_rsa Username:docker}
	I1028 11:15:16.724656    4264 command_runner.go:130] > {"iso_version": "v1.34.0-1729002252-19806", "kicbase_version": "v0.0.45-1729876044-19868", "minikube_version": "v1.34.0", "commit": "64f7d94c4c282cf6ab569a8ab99d2722282b22c9"}
	I1028 11:15:16.733081    4264 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	W1028 11:15:16.733081    4264 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
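The exit-127 failure above comes from invoking the Windows binary name (`curl.exe`) inside the Linux guest, where only `curl` exists. A minimal, hypothetical sketch of probing candidate binary names before running one — the `pick_binary` helper is illustrative, not part of minikube:

```shell
# Return the first candidate binary name that resolves on PATH.
# Probing with `command -v` avoids a hard exit-127 failure when a
# host-specific name (e.g. curl.exe) is absent on the remote guest.
pick_binary() {
  for name in "$@"; do
    if command -v "$name" >/dev/null 2>&1; then
      printf '%s\n' "$name"
      return 0
    fi
  done
  return 1
}

# On a POSIX guest the plain name wins over the Windows one:
pick_binary sh.exe sh   # prints: sh
```

With such a probe, the registry reachability check would fall back to plain `curl` instead of surfacing the `! Failing to connect to https://registry.k8s.io/` warning seen later in this log.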
	I1028 11:15:16.736411    4264 ssh_runner.go:195] Run: systemctl --version
	I1028 11:15:16.749352    4264 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.12)
	I1028 11:15:16.749352    4264 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1028 11:15:16.761281    4264 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1028 11:15:16.774306    4264 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1028 11:15:16.774306    4264 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1028 11:15:16.774306    4264 command_runner.go:130] > Device: 89h/137d	Inode: 240         Links: 1
	I1028 11:15:16.774306    4264 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1028 11:15:16.774306    4264 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1028 11:15:16.774306    4264 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1028 11:15:16.774306    4264 command_runner.go:130] > Change: 2024-10-28 11:00:48.838624228 +0000
	I1028 11:15:16.774306    4264 command_runner.go:130] >  Birth: 2024-10-28 11:00:48.838624228 +0000
	I1028 11:15:16.787324    4264 ssh_runner.go:195] Run: sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1028 11:15:16.803660    4264 command_runner.go:130] ! find: '\\etc\\cni\\net.d': No such file or directory
	W1028 11:15:16.804827    4264 start.go:439] unable to name loopback interface in configureRuntimes: unable to patch loopback cni config "/etc/cni/net.d/*loopback.conf*": sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;: Process exited with status 1
	stdout:
	
	stderr:
	find: '\\etc\\cni\\net.d': No such file or directory
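The `No such file or directory` above is a Windows path-separator leak: the literal `\etc\cni\net.d` was handed to a POSIX `find`, which treats backslashes as ordinary name characters rather than separators. A hedged illustration (the `to_posix_path` helper is hypothetical) of normalizing such a path before splicing it into a remote command:

```shell
# Convert a Windows-style path (backslash separators) to POSIX form
# before building a command that runs inside the Linux guest.
to_posix_path() {
  printf '%s\n' "$1" | tr '\\' '/'
}

to_posix_path '\etc\cni\net.d'   # prints: /etc/cni/net.d
```

Note that the very next logged command (L760's `find /etc/cni/net.d ...`) uses the forward-slash form and succeeds, which is consistent with this diagnosis.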
	I1028 11:15:16.816168    4264 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 11:15:16.834907    4264 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1028 11:15:16.834907    4264 start.go:495] detecting cgroup driver to use...
	I1028 11:15:16.834907    4264 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1028 11:15:16.835774    4264 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W1028 11:15:16.837612    4264 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1028 11:15:16.837612    4264 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1028 11:15:16.870904    4264 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1028 11:15:16.883740    4264 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1028 11:15:16.919409    4264 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1028 11:15:16.942057    4264 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1028 11:15:16.953349    4264 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1028 11:15:16.987997    4264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1028 11:15:17.019845    4264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1028 11:15:17.053706    4264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1028 11:15:17.090262    4264 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 11:15:17.120893    4264 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1028 11:15:17.155046    4264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1028 11:15:17.189009    4264 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1028 11:15:17.224563    4264 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 11:15:17.246921    4264 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1028 11:15:17.257860    4264 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 11:15:17.291209    4264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:15:17.471938    4264 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1028 11:15:27.828299    4264 ssh_runner.go:235] Completed: sudo systemctl restart containerd: (10.3562201s)
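The sequence above patches `/etc/containerd/config.toml` through a series of idempotent in-place `sed` edits (sandbox image, cgroup driver, runtime v2, CNI conf dir) and then restarts containerd via systemd. The cgroup-driver toggle can be reproduced standalone against a scratch copy of the file; a sketch under the assumption of GNU sed (the helper name is illustrative):

```shell
# Rewrite containerd's SystemdCgroup flag in place, preserving indentation.
# $1: path to a containerd config.toml (or a copy); $2: "true" or "false".
set_systemd_cgroup() {
  sed -i -r "s|^( *)SystemdCgroup = .*\$|\1SystemdCgroup = $2|g" "$1"
}

cfg=$(mktemp)
printf '    SystemdCgroup = true\n' > "$cfg"
set_systemd_cgroup "$cfg" false
cat "$cfg"   # prints:     SystemdCgroup = false
rm -f "$cfg"
```

Setting `SystemdCgroup = false` here matches the `cgroupfs` driver detected on the host at L763; the two must agree or the kubelet will refuse to start.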
	I1028 11:15:27.828469    4264 start.go:495] detecting cgroup driver to use...
	I1028 11:15:27.828498    4264 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1028 11:15:27.838862    4264 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1028 11:15:27.867278    4264 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1028 11:15:27.867278    4264 command_runner.go:130] > [Unit]
	I1028 11:15:27.867278    4264 command_runner.go:130] > Description=Docker Application Container Engine
	I1028 11:15:27.867278    4264 command_runner.go:130] > Documentation=https://docs.docker.com
	I1028 11:15:27.867278    4264 command_runner.go:130] > BindsTo=containerd.service
	I1028 11:15:27.867278    4264 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I1028 11:15:27.867278    4264 command_runner.go:130] > Wants=network-online.target
	I1028 11:15:27.867278    4264 command_runner.go:130] > Requires=docker.socket
	I1028 11:15:27.867278    4264 command_runner.go:130] > StartLimitBurst=3
	I1028 11:15:27.867278    4264 command_runner.go:130] > StartLimitIntervalSec=60
	I1028 11:15:27.867278    4264 command_runner.go:130] > [Service]
	I1028 11:15:27.867278    4264 command_runner.go:130] > Type=notify
	I1028 11:15:27.867278    4264 command_runner.go:130] > Restart=on-failure
	I1028 11:15:27.867278    4264 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1028 11:15:27.867278    4264 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1028 11:15:27.867278    4264 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1028 11:15:27.867278    4264 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1028 11:15:27.867278    4264 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1028 11:15:27.867278    4264 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1028 11:15:27.867278    4264 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1028 11:15:27.867278    4264 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1028 11:15:27.867278    4264 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1028 11:15:27.867278    4264 command_runner.go:130] > ExecStart=
	I1028 11:15:27.867820    4264 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1028 11:15:27.867820    4264 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1028 11:15:27.867820    4264 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1028 11:15:27.867875    4264 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1028 11:15:27.867875    4264 command_runner.go:130] > LimitNOFILE=infinity
	I1028 11:15:27.867912    4264 command_runner.go:130] > LimitNPROC=infinity
	I1028 11:15:27.867912    4264 command_runner.go:130] > LimitCORE=infinity
	I1028 11:15:27.867912    4264 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1028 11:15:27.867946    4264 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1028 11:15:27.867946    4264 command_runner.go:130] > TasksMax=infinity
	I1028 11:15:27.867973    4264 command_runner.go:130] > TimeoutStartSec=0
	I1028 11:15:27.868001    4264 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1028 11:15:27.868001    4264 command_runner.go:130] > Delegate=yes
	I1028 11:15:27.868001    4264 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1028 11:15:27.868001    4264 command_runner.go:130] > KillMode=process
	I1028 11:15:27.868060    4264 command_runner.go:130] > [Install]
	I1028 11:15:27.868102    4264 command_runner.go:130] > WantedBy=multi-user.target
	I1028 11:15:27.868213    4264 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I1028 11:15:27.879037    4264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1028 11:15:27.900030    4264 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 11:15:27.939361    4264 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1028 11:15:27.949960    4264 ssh_runner.go:195] Run: which cri-dockerd
	I1028 11:15:27.960975    4264 command_runner.go:130] > /usr/bin/cri-dockerd
	I1028 11:15:27.971317    4264 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1028 11:15:27.989477    4264 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1028 11:15:28.050723    4264 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1028 11:15:28.252492    4264 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1028 11:15:28.433113    4264 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1028 11:15:28.433113    4264 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1028 11:15:28.481389    4264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:15:28.656877    4264 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1028 11:15:29.549203    4264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1028 11:15:29.588541    4264 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1028 11:15:29.633837    4264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1028 11:15:29.670432    4264 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1028 11:15:29.835001    4264 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1028 11:15:29.985660    4264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:15:30.138742    4264 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1028 11:15:30.175943    4264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1028 11:15:30.215803    4264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:15:30.345380    4264 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1028 11:15:30.499812    4264 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1028 11:15:30.513467    4264 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1028 11:15:30.528030    4264 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1028 11:15:30.528030    4264 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1028 11:15:30.528030    4264 command_runner.go:130] > Device: 92h/146d	Inode: 722         Links: 1
	I1028 11:15:30.528030    4264 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I1028 11:15:30.528030    4264 command_runner.go:130] > Access: 2024-10-28 11:15:30.355196105 +0000
	I1028 11:15:30.528030    4264 command_runner.go:130] > Modify: 2024-10-28 11:15:30.355196105 +0000
	I1028 11:15:30.528030    4264 command_runner.go:130] > Change: 2024-10-28 11:15:30.365196783 +0000
	I1028 11:15:30.528030    4264 command_runner.go:130] >  Birth: -
	I1028 11:15:30.528030    4264 start.go:563] Will wait 60s for crictl version
	I1028 11:15:30.539970    4264 ssh_runner.go:195] Run: which crictl
	I1028 11:15:30.552545    4264 command_runner.go:130] > /usr/bin/crictl
	I1028 11:15:30.562777    4264 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 11:15:30.635202    4264 command_runner.go:130] > Version:  0.1.0
	I1028 11:15:30.635202    4264 command_runner.go:130] > RuntimeName:  docker
	I1028 11:15:30.635202    4264 command_runner.go:130] > RuntimeVersion:  27.3.1
	I1028 11:15:30.635202    4264 command_runner.go:130] > RuntimeApiVersion:  v1
	I1028 11:15:30.635202    4264 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1028 11:15:30.646281    4264 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1028 11:15:30.700918    4264 command_runner.go:130] > 27.3.1
	I1028 11:15:30.710000    4264 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1028 11:15:30.760778    4264 command_runner.go:130] > 27.3.1
	I1028 11:15:30.764261    4264 out.go:235] * Preparing Kubernetes v1.31.2 on Docker 27.3.1 ...
	I1028 11:15:30.774459    4264 cli_runner.go:164] Run: docker exec -t functional-928900 dig +short host.docker.internal
	I1028 11:15:30.952571    4264 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1028 11:15:30.964273    4264 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1028 11:15:30.975867    4264 command_runner.go:130] > 192.168.65.254	host.minikube.internal
	I1028 11:15:30.985694    4264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-928900
	I1028 11:15:31.059874    4264 kubeadm.go:883] updating cluster {Name:functional-928900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-928900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 11:15:31.059874    4264 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 11:15:31.068868    4264 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1028 11:15:31.108472    4264 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.31.2
	I1028 11:15:31.108472    4264 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.31.2
	I1028 11:15:31.108472    4264 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 11:15:31.108596    4264 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.31.2
	I1028 11:15:31.108596    4264 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I1028 11:15:31.108596    4264 command_runner.go:130] > registry.k8s.io/etcd:3.5.15-0
	I1028 11:15:31.108596    4264 command_runner.go:130] > registry.k8s.io/pause:3.10
	I1028 11:15:31.108596    4264 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 11:15:31.113021    4264 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.2
	registry.k8s.io/kube-scheduler:v1.31.2
	registry.k8s.io/kube-controller-manager:v1.31.2
	registry.k8s.io/kube-proxy:v1.31.2
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1028 11:15:31.113128    4264 docker.go:619] Images already preloaded, skipping extraction
	I1028 11:15:31.122612    4264 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1028 11:15:31.164851    4264 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.31.2
	I1028 11:15:31.164918    4264 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 11:15:31.164987    4264 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.31.2
	I1028 11:15:31.165056    4264 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.31.2
	I1028 11:15:31.165056    4264 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I1028 11:15:31.165126    4264 command_runner.go:130] > registry.k8s.io/etcd:3.5.15-0
	I1028 11:15:31.165126    4264 command_runner.go:130] > registry.k8s.io/pause:3.10
	I1028 11:15:31.165126    4264 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 11:15:31.165285    4264 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.2
	registry.k8s.io/kube-controller-manager:v1.31.2
	registry.k8s.io/kube-scheduler:v1.31.2
	registry.k8s.io/kube-proxy:v1.31.2
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1028 11:15:31.165285    4264 cache_images.go:84] Images are preloaded, skipping loading
	I1028 11:15:31.165408    4264 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.31.2 docker true true} ...
	I1028 11:15:31.165789    4264 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-928900 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:functional-928900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 11:15:31.177774    4264 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1028 11:15:31.278073    4264 command_runner.go:130] > cgroupfs
	I1028 11:15:31.278073    4264 cni.go:84] Creating CNI manager for ""
	I1028 11:15:31.278073    4264 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 11:15:31.278073    4264 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 11:15:31.278073    4264 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-928900 NodeName:functional-928900 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 11:15:31.278726    4264 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-928900"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 11:15:31.291813    4264 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 11:15:31.314753    4264 command_runner.go:130] > kubeadm
	I1028 11:15:31.314753    4264 command_runner.go:130] > kubectl
	I1028 11:15:31.314753    4264 command_runner.go:130] > kubelet
	I1028 11:15:31.314753    4264 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 11:15:31.328133    4264 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 11:15:31.349970    4264 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1028 11:15:31.383709    4264 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 11:15:31.414652    4264 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I1028 11:15:31.461724    4264 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1028 11:15:31.477894    4264 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1028 11:15:31.488770    4264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:15:31.654457    4264 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:15:31.680079    4264 certs.go:68] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-928900 for IP: 192.168.49.2
	I1028 11:15:31.680079    4264 certs.go:194] generating shared ca certs ...
	I1028 11:15:31.680079    4264 certs.go:226] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:15:31.680079    4264 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1028 11:15:31.681485    4264 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1028 11:15:31.681688    4264 certs.go:256] generating profile certs ...
	I1028 11:15:31.682650    4264 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-928900\client.key
	I1028 11:15:31.684338    4264 certs.go:359] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-928900\apiserver.key.49f168d6
	I1028 11:15:31.684677    4264 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-928900\proxy-client.key
	I1028 11:15:31.684677    4264 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1028 11:15:31.685413    4264 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1028 11:15:31.685651    4264 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1028 11:15:31.685651    4264 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1028 11:15:31.686219    4264 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-928900\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1028 11:15:31.686430    4264 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-928900\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1028 11:15:31.686721    4264 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-928900\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1028 11:15:31.686811    4264 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-928900\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1028 11:15:31.687346    4264 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11176.pem (1338 bytes)
	W1028 11:15:31.687671    4264 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11176_empty.pem, impossibly tiny 0 bytes
	I1028 11:15:31.687737    4264 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1028 11:15:31.687737    4264 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1028 11:15:31.688256    4264 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1028 11:15:31.688331    4264 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1028 11:15:31.688861    4264 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\111762.pem (1708 bytes)
	I1028 11:15:31.689178    4264 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:15:31.689178    4264 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11176.pem -> /usr/share/ca-certificates/11176.pem
	I1028 11:15:31.689178    4264 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\111762.pem -> /usr/share/ca-certificates/111762.pem
	I1028 11:15:31.690563    4264 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 11:15:31.737996    4264 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1028 11:15:31.784564    4264 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 11:15:31.829834    4264 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1028 11:15:31.877749    4264 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-928900\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1028 11:15:31.919165    4264 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-928900\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1028 11:15:31.962766    4264 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-928900\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 11:15:32.009296    4264 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-928900\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 11:15:32.053015    4264 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 11:15:32.096891    4264 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11176.pem --> /usr/share/ca-certificates/11176.pem (1338 bytes)
	I1028 11:15:32.141316    4264 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\111762.pem --> /usr/share/ca-certificates/111762.pem (1708 bytes)
	I1028 11:15:32.183380    4264 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 11:15:32.230277    4264 ssh_runner.go:195] Run: openssl version
	I1028 11:15:32.240801    4264 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1028 11:15:32.252104    4264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111762.pem && ln -fs /usr/share/ca-certificates/111762.pem /etc/ssl/certs/111762.pem"
	I1028 11:15:32.286526    4264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111762.pem
	I1028 11:15:32.299858    4264 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 28 11:13 /usr/share/ca-certificates/111762.pem
	I1028 11:15:32.299915    4264 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:13 /usr/share/ca-certificates/111762.pem
	I1028 11:15:32.311163    4264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111762.pem
	I1028 11:15:32.328912    4264 command_runner.go:130] > 3ec20f2e
	I1028 11:15:32.339180    4264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111762.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 11:15:32.370494    4264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 11:15:32.406220    4264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:15:32.418626    4264 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 28 11:02 /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:15:32.418688    4264 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 11:02 /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:15:32.430204    4264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:15:32.446972    4264 command_runner.go:130] > b5213941
	I1028 11:15:32.457092    4264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 11:15:32.490115    4264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11176.pem && ln -fs /usr/share/ca-certificates/11176.pem /etc/ssl/certs/11176.pem"
	I1028 11:15:32.521872    4264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11176.pem
	I1028 11:15:32.535698    4264 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 28 11:13 /usr/share/ca-certificates/11176.pem
	I1028 11:15:32.535698    4264 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:13 /usr/share/ca-certificates/11176.pem
	I1028 11:15:32.545951    4264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11176.pem
	I1028 11:15:32.561688    4264 command_runner.go:130] > 51391683
	I1028 11:15:32.572318    4264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11176.pem /etc/ssl/certs/51391683.0"
	I1028 11:15:32.606398    4264 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 11:15:32.619667    4264 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 11:15:32.619734    4264 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1028 11:15:32.619734    4264 command_runner.go:130] > Device: 830h/2096d	Inode: 18738       Links: 1
	I1028 11:15:32.619734    4264 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1028 11:15:32.619734    4264 command_runner.go:130] > Access: 2024-10-28 11:14:21.119199710 +0000
	I1028 11:15:32.619734    4264 command_runner.go:130] > Modify: 2024-10-28 11:14:21.119199710 +0000
	I1028 11:15:32.619804    4264 command_runner.go:130] > Change: 2024-10-28 11:14:21.119199710 +0000
	I1028 11:15:32.619804    4264 command_runner.go:130] >  Birth: 2024-10-28 11:14:21.119199710 +0000
	I1028 11:15:32.630836    4264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 11:15:32.646714    4264 command_runner.go:130] > Certificate will not expire
	I1028 11:15:32.657177    4264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 11:15:32.672600    4264 command_runner.go:130] > Certificate will not expire
	I1028 11:15:32.684018    4264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 11:15:32.698441    4264 command_runner.go:130] > Certificate will not expire
	I1028 11:15:32.708786    4264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 11:15:32.724624    4264 command_runner.go:130] > Certificate will not expire
	I1028 11:15:32.736368    4264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 11:15:32.759769    4264 command_runner.go:130] > Certificate will not expire
	I1028 11:15:32.770281    4264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1028 11:15:32.787807    4264 command_runner.go:130] > Certificate will not expire
	I1028 11:15:32.788828    4264 kubeadm.go:392] StartCluster: {Name:functional-928900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-928900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:15:32.800202    4264 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1028 11:15:32.856100    4264 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 11:15:32.875139    4264 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1028 11:15:32.875207    4264 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1028 11:15:32.875207    4264 command_runner.go:130] > /var/lib/minikube/etcd:
	I1028 11:15:32.875207    4264 command_runner.go:130] > member
	I1028 11:15:32.875207    4264 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 11:15:32.875400    4264 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 11:15:32.886203    4264 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 11:15:32.907211    4264 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 11:15:32.916110    4264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-928900
	I1028 11:15:32.994247    4264 kubeconfig.go:125] found "functional-928900" server: "https://127.0.0.1:59551"
	I1028 11:15:32.995572    4264 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1028 11:15:32.996433    4264 kapi.go:59] client config for functional-928900: &rest.Config{Host:"https://127.0.0.1:59551", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-928900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-928900\\client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22eb3a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1028 11:15:32.996937    4264 cert_rotation.go:140] Starting client certificate rotation controller
	I1028 11:15:33.007013    4264 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 11:15:33.029948    4264 kubeadm.go:630] The running cluster does not require reconfiguration: 127.0.0.1
	I1028 11:15:33.030077    4264 kubeadm.go:597] duration metric: took 154.6751ms to restartPrimaryControlPlane
	I1028 11:15:33.030077    4264 kubeadm.go:394] duration metric: took 241.2454ms to StartCluster
	I1028 11:15:33.030168    4264 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:15:33.030304    4264 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1028 11:15:33.031475    4264 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:15:33.032663    4264 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 11:15:33.032734    4264 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 11:15:33.033039    4264 addons.go:69] Setting storage-provisioner=true in profile "functional-928900"
	I1028 11:15:33.033039    4264 addons.go:69] Setting default-storageclass=true in profile "functional-928900"
	I1028 11:15:33.033147    4264 addons.go:234] Setting addon storage-provisioner=true in "functional-928900"
	W1028 11:15:33.033188    4264 addons.go:243] addon storage-provisioner should already be in state true
	I1028 11:15:33.033188    4264 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-928900"
	I1028 11:15:33.033352    4264 host.go:66] Checking if "functional-928900" exists ...
	I1028 11:15:33.033352    4264 config.go:182] Loaded profile config "functional-928900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 11:15:33.035836    4264 out.go:177] * Verifying Kubernetes components...
	I1028 11:15:33.050360    4264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:15:33.053284    4264 cli_runner.go:164] Run: docker container inspect functional-928900 --format={{.State.Status}}
	I1028 11:15:33.054153    4264 cli_runner.go:164] Run: docker container inspect functional-928900 --format={{.State.Status}}
	I1028 11:15:33.120162    4264 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1028 11:15:33.121144    4264 kapi.go:59] client config for functional-928900: &rest.Config{Host:"https://127.0.0.1:59551", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-928900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-928900\\client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22eb3a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1028 11:15:33.121144    4264 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 11:15:33.121144    4264 addons.go:234] Setting addon default-storageclass=true in "functional-928900"
	W1028 11:15:33.121144    4264 addons.go:243] addon default-storageclass should already be in state true
	I1028 11:15:33.121144    4264 host.go:66] Checking if "functional-928900" exists ...
	I1028 11:15:33.123170    4264 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 11:15:33.123170    4264 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 11:15:33.135145    4264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-928900
	I1028 11:15:33.143144    4264 cli_runner.go:164] Run: docker container inspect functional-928900 --format={{.State.Status}}
	I1028 11:15:33.200146    4264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59547 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-928900\id_rsa Username:docker}
	I1028 11:15:33.200146    4264 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 11:15:33.200146    4264 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 11:15:33.209128    4264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-928900
	I1028 11:15:33.226593    4264 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:15:33.273718    4264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-928900
	I1028 11:15:33.297013    4264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59547 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-928900\id_rsa Username:docker}
	I1028 11:15:33.343164    4264 node_ready.go:35] waiting up to 6m0s for node "functional-928900" to be "Ready" ...
	I1028 11:15:33.344172    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/nodes/functional-928900
	I1028 11:15:33.344172    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:33.344172    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:33.344172    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:33.347183    4264 round_trippers.go:574] Response Status:  in 3 milliseconds
	I1028 11:15:33.347183    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:33.366622    4264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 11:15:33.460271    4264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 11:15:33.484228    4264 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1028 11:15:33.488014    4264 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1028 11:15:33.488014    4264 retry.go:31] will retry after 162.019005ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1028 11:15:33.565193    4264 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1028 11:15:33.572355    4264 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1028 11:15:33.572355    4264 retry.go:31] will retry after 193.504638ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1028 11:15:33.661258    4264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 11:15:33.757697    4264 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1028 11:15:33.764818    4264 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1028 11:15:33.764818    4264 retry.go:31] will retry after 304.915787ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1028 11:15:33.778741    4264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1028 11:15:33.877764    4264 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1028 11:15:33.885209    4264 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1028 11:15:33.885286    4264 retry.go:31] will retry after 285.975294ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1028 11:15:34.081944    4264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 11:15:34.174614    4264 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1028 11:15:34.178101    4264 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1028 11:15:34.178101    4264 retry.go:31] will retry after 790.907551ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1028 11:15:34.181605    4264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1028 11:15:34.274535    4264 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1028 11:15:34.281047    4264 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1028 11:15:34.281047    4264 retry.go:31] will retry after 690.168001ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1028 11:15:34.347487    4264 with_retry.go:234] Got a Retry-After 1s response for attempt 1 to https://127.0.0.1:59551/api/v1/nodes/functional-928900
	I1028 11:15:34.347487    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/nodes/functional-928900
	I1028 11:15:34.347487    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:34.347487    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:34.347487    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:34.350491    4264 round_trippers.go:574] Response Status:  in 3 milliseconds
	I1028 11:15:34.350491    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:34.981746    4264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 11:15:34.983314    4264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1028 11:15:35.077711    4264 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1028 11:15:35.083645    4264 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1028 11:15:35.083645    4264 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1028 11:15:35.084175    4264 retry.go:31] will retry after 486.897077ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1028 11:15:35.084175    4264 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1028 11:15:35.084269    4264 retry.go:31] will retry after 701.184208ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1028 11:15:35.350876    4264 with_retry.go:234] Got a Retry-After 1s response for attempt 2 to https://127.0.0.1:59551/api/v1/nodes/functional-928900
	I1028 11:15:35.350876    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/nodes/functional-928900
	I1028 11:15:35.351320    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:35.351320    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:35.351320    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:35.356259    4264 round_trippers.go:574] Response Status:  in 4 milliseconds
	I1028 11:15:35.356259    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:35.582328    4264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 11:15:35.672329    4264 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1028 11:15:35.676632    4264 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1028 11:15:35.677187    4264 retry.go:31] will retry after 1.601984481s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1028 11:15:35.796961    4264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1028 11:15:35.898788    4264 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1028 11:15:35.905849    4264 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1028 11:15:35.905849    4264 retry.go:31] will retry after 1.694948438s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1028 11:15:36.356755    4264 with_retry.go:234] Got a Retry-After 1s response for attempt 3 to https://127.0.0.1:59551/api/v1/nodes/functional-928900
	I1028 11:15:36.356755    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/nodes/functional-928900
	I1028 11:15:36.356755    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:36.356755    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:36.356755    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:36.361341    4264 round_trippers.go:574] Response Status:  in 4 milliseconds
	I1028 11:15:36.361341    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:37.291995    4264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 11:15:37.361918    4264 with_retry.go:234] Got a Retry-After 1s response for attempt 4 to https://127.0.0.1:59551/api/v1/nodes/functional-928900
	I1028 11:15:37.361918    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/nodes/functional-928900
	I1028 11:15:37.361918    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:37.361918    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:37.361918    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:37.365503    4264 round_trippers.go:574] Response Status:  in 3 milliseconds
	I1028 11:15:37.365503    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:37.613588    4264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1028 11:15:37.933187    4264 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1028 11:15:38.016023    4264 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1028 11:15:38.016059    4264 retry.go:31] will retry after 1.75824906s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1028 11:15:38.226985    4264 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1028 11:15:38.227180    4264 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1028 11:15:38.227180    4264 retry.go:31] will retry after 1.147801038s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1028 11:15:38.369266    4264 with_retry.go:234] Got a Retry-After 1s response for attempt 5 to https://127.0.0.1:59551/api/v1/nodes/functional-928900
	I1028 11:15:38.369480    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/nodes/functional-928900
	I1028 11:15:38.369480    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:38.369480    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:38.369480    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:38.372374    4264 round_trippers.go:574] Response Status:  in 2 milliseconds
	I1028 11:15:38.372374    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:39.373179    4264 with_retry.go:234] Got a Retry-After 1s response for attempt 6 to https://127.0.0.1:59551/api/v1/nodes/functional-928900
	I1028 11:15:39.373337    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/nodes/functional-928900
	I1028 11:15:39.373669    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:39.373669    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:39.373669    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:39.377902    4264 round_trippers.go:574] Response Status:  in 4 milliseconds
	I1028 11:15:39.377902    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:39.389183    4264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1028 11:15:39.786911    4264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 11:15:40.379025    4264 with_retry.go:234] Got a Retry-After 1s response for attempt 7 to https://127.0.0.1:59551/api/v1/nodes/functional-928900
	I1028 11:15:40.379025    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/nodes/functional-928900
	I1028 11:15:40.379025    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:40.379025    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:40.379025    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:43.924123    4264 round_trippers.go:574] Response Status: 200 OK in 3545 milliseconds
	I1028 11:15:43.924123    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:43.924123    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:43.924269    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:43.924269    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:43.924306    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:43 GMT
	I1028 11:15:43.924350    4264 round_trippers.go:580]     Audit-Id: bddefab9-a036-44a6-a8d4-0990aa956ff3
	I1028 11:15:43.924350    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:43.924733    4264 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-928900","uid":"c7dc49f8-e0c9-4799-884c-da76f8c1543f","resourceVersion":"394","creationTimestamp":"2024-10-28T11:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-928900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f","minikube.k8s.io/name":"functional-928900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_14_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:14:30Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I1028 11:15:43.925782    4264 node_ready.go:49] node "functional-928900" has status "Ready":"True"
	I1028 11:15:43.925842    4264 node_ready.go:38] duration metric: took 10.5825329s for node "functional-928900" to be "Ready" ...
	I1028 11:15:43.925908    4264 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 11:15:43.926081    4264 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1028 11:15:43.926194    4264 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1028 11:15:43.926247    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/namespaces/kube-system/pods
	I1028 11:15:43.926351    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:43.926399    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:43.926399    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:44.222852    4264 round_trippers.go:574] Response Status: 200 OK in 296 milliseconds
	I1028 11:15:44.222960    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:44.223014    4264 round_trippers.go:580]     Audit-Id: cc9a1ad2-9604-4e8b-8515-22e30152f1ee
	I1028 11:15:44.223014    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:44.223112    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:44.223208    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:44.223208    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:44.223208    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:44 GMT
	I1028 11:15:44.224349    4264 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"438"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-9j7zg","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"b456d1cc-467f-4b3d-a619-bc9f17258666","resourceVersion":"424","creationTimestamp":"2024-10-28T11:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"7533fda5-404a-4712-bcb1-46f70a06c53e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7533fda5-404a-4712-bcb1-46f70a06c53e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 51831 chars]
	I1028 11:15:44.230946    4264 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9j7zg" in "kube-system" namespace to be "Ready" ...
	I1028 11:15:44.230946    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9j7zg
	I1028 11:15:44.230946    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:44.230946    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:44.230946    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:44.256120    4264 round_trippers.go:574] Response Status: 200 OK in 25 milliseconds
	I1028 11:15:44.256171    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:44.256229    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:44.256229    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:44 GMT
	I1028 11:15:44.256229    4264 round_trippers.go:580]     Audit-Id: c24f1795-3236-4b36-a777-a2547959db64
	I1028 11:15:44.256229    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:44.256229    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:44.256229    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:44.256395    4264 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-9j7zg","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"b456d1cc-467f-4b3d-a619-bc9f17258666","resourceVersion":"424","creationTimestamp":"2024-10-28T11:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"7533fda5-404a-4712-bcb1-46f70a06c53e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7533fda5-404a-4712-bcb1-46f70a06c53e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6495 chars]
	I1028 11:15:44.257409    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/nodes/functional-928900
	I1028 11:15:44.257409    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:44.257471    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:44.257471    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:44.317090    4264 round_trippers.go:574] Response Status: 200 OK in 58 milliseconds
	I1028 11:15:44.317137    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:44.317137    4264 round_trippers.go:580]     Audit-Id: 80aab36d-c0d0-4f7f-880c-2bc877e4de52
	I1028 11:15:44.317137    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:44.317318    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:44.317356    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:44.317356    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:44.317356    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:44 GMT
	I1028 11:15:44.317614    4264 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-928900","uid":"c7dc49f8-e0c9-4799-884c-da76f8c1543f","resourceVersion":"394","creationTimestamp":"2024-10-28T11:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-928900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f","minikube.k8s.io/name":"functional-928900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_14_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:14:30Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I1028 11:15:44.320669    4264 pod_ready.go:93] pod "coredns-7c65d6cfc9-9j7zg" in "kube-system" namespace has status "Ready":"True"
	I1028 11:15:44.320669    4264 pod_ready.go:82] duration metric: took 89.7217ms for pod "coredns-7c65d6cfc9-9j7zg" in "kube-system" namespace to be "Ready" ...
	I1028 11:15:44.320669    4264 pod_ready.go:79] waiting up to 6m0s for pod "etcd-functional-928900" in "kube-system" namespace to be "Ready" ...
	I1028 11:15:44.321266    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/namespaces/kube-system/pods/etcd-functional-928900
	I1028 11:15:44.321266    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:44.321266    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:44.321266    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:44.329930    4264 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1028 11:15:44.330067    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:44.330067    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:44.330117    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:44.330117    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:44.330117    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:44.330117    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:44 GMT
	I1028 11:15:44.330117    4264 round_trippers.go:580]     Audit-Id: a7388ca2-3870-43aa-a4b1-90e9adb5960a
	I1028 11:15:44.330117    4264 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-928900","namespace":"kube-system","uid":"215e6379-1e1b-4c22-825f-3f69322a34de","resourceVersion":"280","creationTimestamp":"2024-10-28T11:14:32Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"1aabb813122244e9ea32e3595201372c","kubernetes.io/config.mirror":"1aabb813122244e9ea32e3595201372c","kubernetes.io/config.seen":"2024-10-28T11:14:25.343551628Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-928900","uid":"c7dc49f8-e0c9-4799-884c-da76f8c1543f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:14:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6459 chars]
	I1028 11:15:44.331637    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/nodes/functional-928900
	I1028 11:15:44.331637    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:44.331637    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:44.331637    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:44.431148    4264 round_trippers.go:574] Response Status: 200 OK in 99 milliseconds
	I1028 11:15:44.432147    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:44.432147    4264 round_trippers.go:580]     Audit-Id: bfc5dcc3-45d8-40cd-adb6-dfd2512a8a0e
	I1028 11:15:44.432147    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:44.432147    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:44.432147    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:44.432147    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:44.432147    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:44 GMT
	I1028 11:15:44.432147    4264 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-928900","uid":"c7dc49f8-e0c9-4799-884c-da76f8c1543f","resourceVersion":"394","creationTimestamp":"2024-10-28T11:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-928900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f","minikube.k8s.io/name":"functional-928900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_14_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:14:30Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I1028 11:15:44.432147    4264 pod_ready.go:93] pod "etcd-functional-928900" in "kube-system" namespace has status "Ready":"True"
	I1028 11:15:44.432147    4264 pod_ready.go:82] duration metric: took 111.4765ms for pod "etcd-functional-928900" in "kube-system" namespace to be "Ready" ...
	I1028 11:15:44.432147    4264 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-functional-928900" in "kube-system" namespace to be "Ready" ...
	I1028 11:15:44.432147    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-928900
	I1028 11:15:44.432147    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:44.432147    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:44.432147    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:44.440475    4264 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1028 11:15:44.440475    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:44.440475    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:44.440475    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:44 GMT
	I1028 11:15:44.440475    4264 round_trippers.go:580]     Audit-Id: 782adee2-dabb-4c7d-aa6b-9a7aca3b11e0
	I1028 11:15:44.440475    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:44.440475    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:44.440475    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:44.441466    4264 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-928900","namespace":"kube-system","uid":"00df3313-23ae-438c-8c10-438154994614","resourceVersion":"438","creationTimestamp":"2024-10-28T11:14:35Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"84fd6e349fc38958f545bfe6481372bf","kubernetes.io/config.mirror":"84fd6e349fc38958f545bfe6481372bf","kubernetes.io/config.seen":"2024-10-28T11:14:34.563925801Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-928900","uid":"c7dc49f8-e0c9-4799-884c-da76f8c1543f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:14:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8941 chars]
	I1028 11:15:44.442967    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/nodes/functional-928900
	I1028 11:15:44.443117    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:44.443156    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:44.443156    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:44.448047    4264 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:15:44.448047    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:44.448047    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:44 GMT
	I1028 11:15:44.448047    4264 round_trippers.go:580]     Audit-Id: 310c1780-000b-482d-afd7-a577617dbc4d
	I1028 11:15:44.448047    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:44.448047    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:44.448047    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:44.448047    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:44.448047    4264 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-928900","uid":"c7dc49f8-e0c9-4799-884c-da76f8c1543f","resourceVersion":"394","creationTimestamp":"2024-10-28T11:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-928900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f","minikube.k8s.io/name":"functional-928900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_14_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:14:30Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I1028 11:15:44.525102    4264 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I1028 11:15:44.525102    4264 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (5.1358474s)
	I1028 11:15:44.525717    4264 round_trippers.go:463] GET https://127.0.0.1:59551/apis/storage.k8s.io/v1/storageclasses
	I1028 11:15:44.525798    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:44.525866    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:44.525866    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:44.547331    4264 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I1028 11:15:44.547418    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:44.547418    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:44.547418    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:44.547418    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:44.547512    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:44.547512    4264 round_trippers.go:580]     Content-Length: 1273
	I1028 11:15:44.547512    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:44 GMT
	I1028 11:15:44.547512    4264 round_trippers.go:580]     Audit-Id: f51dbd3f-2e79-4897-84e6-b3dd02363e3f
	I1028 11:15:44.547629    4264 request.go:1351] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"446"},"items":[{"metadata":{"name":"standard","uid":"bc4fadfa-83de-4cdc-a523-edd90e8104ac","resourceVersion":"339","creationTimestamp":"2024-10-28T11:14:40Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-10-28T11:14:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1028 11:15:44.548261    4264 request.go:1351] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"bc4fadfa-83de-4cdc-a523-edd90e8104ac","resourceVersion":"339","creationTimestamp":"2024-10-28T11:14:40Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-10-28T11:14:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1028 11:15:44.548415    4264 round_trippers.go:463] PUT https://127.0.0.1:59551/apis/storage.k8s.io/v1/storageclasses/standard
	I1028 11:15:44.548415    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:44.548415    4264 round_trippers.go:473]     Content-Type: application/json
	I1028 11:15:44.548496    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:44.548496    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:44.619254    4264 round_trippers.go:574] Response Status: 200 OK in 70 milliseconds
	I1028 11:15:44.619254    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:44.619254    4264 round_trippers.go:580]     Audit-Id: e6e082db-9d80-4f09-aae0-c95d5b59f5e1
	I1028 11:15:44.619254    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:44.619254    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:44.619254    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:44.619254    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:44.619386    4264 round_trippers.go:580]     Content-Length: 1220
	I1028 11:15:44.619386    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:44 GMT
	I1028 11:15:44.619630    4264 request.go:1351] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"bc4fadfa-83de-4cdc-a523-edd90e8104ac","resourceVersion":"339","creationTimestamp":"2024-10-28T11:14:40Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-10-28T11:14:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1028 11:15:44.932889    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-928900
	I1028 11:15:44.932889    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:44.932889    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:44.932889    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:45.017228    4264 round_trippers.go:574] Response Status: 200 OK in 84 milliseconds
	I1028 11:15:45.017228    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:45.017228    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:45.017228    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:45.017228    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:45.017228    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:45.017228    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:45 GMT
	I1028 11:15:45.017374    4264 round_trippers.go:580]     Audit-Id: 3c2dd726-8954-4225-8b25-fd635c91f3c6
	I1028 11:15:45.017755    4264 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-928900","namespace":"kube-system","uid":"00df3313-23ae-438c-8c10-438154994614","resourceVersion":"438","creationTimestamp":"2024-10-28T11:14:35Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"84fd6e349fc38958f545bfe6481372bf","kubernetes.io/config.mirror":"84fd6e349fc38958f545bfe6481372bf","kubernetes.io/config.seen":"2024-10-28T11:14:34.563925801Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-928900","uid":"c7dc49f8-e0c9-4799-884c-da76f8c1543f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:14:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8941 chars]
	I1028 11:15:45.018786    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/nodes/functional-928900
	I1028 11:15:45.018920    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:45.018920    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:45.019032    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:45.024970    4264 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:15:45.025540    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:45.025540    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:45 GMT
	I1028 11:15:45.025540    4264 round_trippers.go:580]     Audit-Id: 16b3d8c3-086b-4d5b-b819-120604f864d5
	I1028 11:15:45.025540    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:45.025540    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:45.025644    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:45.025644    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:45.025818    4264 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-928900","uid":"c7dc49f8-e0c9-4799-884c-da76f8c1543f","resourceVersion":"394","creationTimestamp":"2024-10-28T11:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-928900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f","minikube.k8s.io/name":"functional-928900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_14_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:14:30Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I1028 11:15:45.432372    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-928900
	I1028 11:15:45.432372    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:45.432372    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:45.432372    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:45.438629    4264 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:15:45.438629    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:45.438629    4264 round_trippers.go:580]     Audit-Id: e8d6cad0-88d1-4f4a-9ae4-3303d68cb682
	I1028 11:15:45.438629    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:45.438629    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:45.438629    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:45.438629    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:45.438629    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:45 GMT
	I1028 11:15:45.439394    4264 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-928900","namespace":"kube-system","uid":"00df3313-23ae-438c-8c10-438154994614","resourceVersion":"466","creationTimestamp":"2024-10-28T11:14:35Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"84fd6e349fc38958f545bfe6481372bf","kubernetes.io/config.mirror":"84fd6e349fc38958f545bfe6481372bf","kubernetes.io/config.seen":"2024-10-28T11:14:34.563925801Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-928900","uid":"c7dc49f8-e0c9-4799-884c-da76f8c1543f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:14:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I1028 11:15:45.439394    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/nodes/functional-928900
	I1028 11:15:45.439394    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:45.439394    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:45.439394    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:45.446750    4264 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 11:15:45.446809    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:45.446809    4264 round_trippers.go:580]     Audit-Id: 1c005b73-ed0a-4e09-abe6-f7884ab41f63
	I1028 11:15:45.446884    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:45.446884    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:45.446884    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:45.446884    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:45.446942    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:45 GMT
	I1028 11:15:45.447265    4264 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-928900","uid":"c7dc49f8-e0c9-4799-884c-da76f8c1543f","resourceVersion":"394","creationTimestamp":"2024-10-28T11:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-928900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f","minikube.k8s.io/name":"functional-928900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_14_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:14:30Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I1028 11:15:45.933176    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-928900
	I1028 11:15:45.933238    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:45.933238    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:45.933238    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:45.937802    4264 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:15:45.937802    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:45.937802    4264 round_trippers.go:580]     Audit-Id: 11f5c842-6013-40cc-84d0-9a507098700d
	I1028 11:15:45.937802    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:45.937802    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:45.937802    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:45.937802    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:45.937802    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:45 GMT
	I1028 11:15:45.937802    4264 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-928900","namespace":"kube-system","uid":"00df3313-23ae-438c-8c10-438154994614","resourceVersion":"466","creationTimestamp":"2024-10-28T11:14:35Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"84fd6e349fc38958f545bfe6481372bf","kubernetes.io/config.mirror":"84fd6e349fc38958f545bfe6481372bf","kubernetes.io/config.seen":"2024-10-28T11:14:34.563925801Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-928900","uid":"c7dc49f8-e0c9-4799-884c-da76f8c1543f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:14:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I1028 11:15:45.939101    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/nodes/functional-928900
	I1028 11:15:45.939123    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:45.939123    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:45.939123    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:45.950921    4264 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1028 11:15:45.950921    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:45.950921    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:45.950921    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:45.950921    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:45.950921    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:45.950921    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:45 GMT
	I1028 11:15:45.950921    4264 round_trippers.go:580]     Audit-Id: f815e781-a1fe-4264-a1f7-ced4f8ff9aca
	I1028 11:15:45.950921    4264 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-928900","uid":"c7dc49f8-e0c9-4799-884c-da76f8c1543f","resourceVersion":"394","creationTimestamp":"2024-10-28T11:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-928900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f","minikube.k8s.io/name":"functional-928900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_14_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:14:30Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I1028 11:15:46.118869    4264 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I1028 11:15:46.118869    4264 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I1028 11:15:46.118869    4264 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I1028 11:15:46.118869    4264 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I1028 11:15:46.118869    4264 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I1028 11:15:46.118869    4264 command_runner.go:130] > pod/storage-provisioner configured
	I1028 11:15:46.119219    4264 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.3321526s)
	I1028 11:15:46.125174    4264 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1028 11:15:46.128310    4264 addons.go:510] duration metric: took 13.0954671s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1028 11:15:46.432343    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-928900
	I1028 11:15:46.432343    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:46.432343    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:46.432343    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:46.438475    4264 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:15:46.438520    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:46.438520    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:46.438520    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:46.438520    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:46.438520    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:46 GMT
	I1028 11:15:46.438520    4264 round_trippers.go:580]     Audit-Id: f11f8761-f407-4dad-8595-3f7477ae45d6
	I1028 11:15:46.438520    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:46.438837    4264 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-928900","namespace":"kube-system","uid":"00df3313-23ae-438c-8c10-438154994614","resourceVersion":"466","creationTimestamp":"2024-10-28T11:14:35Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"84fd6e349fc38958f545bfe6481372bf","kubernetes.io/config.mirror":"84fd6e349fc38958f545bfe6481372bf","kubernetes.io/config.seen":"2024-10-28T11:14:34.563925801Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-928900","uid":"c7dc49f8-e0c9-4799-884c-da76f8c1543f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:14:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I1028 11:15:46.439666    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/nodes/functional-928900
	I1028 11:15:46.439666    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:46.439666    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:46.439666    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:46.445911    4264 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:15:46.445911    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:46.445911    4264 round_trippers.go:580]     Audit-Id: d2786ce4-9370-49e3-a07c-e8a6e2fcea4b
	I1028 11:15:46.445911    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:46.445911    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:46.445911    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:46.445911    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:46.445911    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:46 GMT
	I1028 11:15:46.445911    4264 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-928900","uid":"c7dc49f8-e0c9-4799-884c-da76f8c1543f","resourceVersion":"394","creationTimestamp":"2024-10-28T11:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-928900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f","minikube.k8s.io/name":"functional-928900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_14_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:14:30Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I1028 11:15:46.446654    4264 pod_ready.go:103] pod "kube-apiserver-functional-928900" in "kube-system" namespace has status "Ready":"False"
	I1028 11:15:46.932586    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-928900
	I1028 11:15:46.932586    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:46.932586    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:46.932586    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:46.938166    4264 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:15:46.938166    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:46.938255    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:46.938255    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:46.938255    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:46.938255    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:46.938255    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:46 GMT
	I1028 11:15:46.938255    4264 round_trippers.go:580]     Audit-Id: 7f5c466e-3d0e-41aa-97a3-43e67b6f7eb7
	I1028 11:15:46.938434    4264 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-928900","namespace":"kube-system","uid":"00df3313-23ae-438c-8c10-438154994614","resourceVersion":"466","creationTimestamp":"2024-10-28T11:14:35Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"84fd6e349fc38958f545bfe6481372bf","kubernetes.io/config.mirror":"84fd6e349fc38958f545bfe6481372bf","kubernetes.io/config.seen":"2024-10-28T11:14:34.563925801Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-928900","uid":"c7dc49f8-e0c9-4799-884c-da76f8c1543f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:14:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I1028 11:15:46.939106    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/nodes/functional-928900
	I1028 11:15:46.939106    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:46.939106    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:46.939203    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:46.945729    4264 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:15:46.945729    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:46.945729    4264 round_trippers.go:580]     Audit-Id: ce4bc346-94fc-42dc-905d-251252ec23c7
	I1028 11:15:46.945729    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:46.945729    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:46.945729    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:46.945729    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:46.945729    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:46 GMT
	I1028 11:15:46.946628    4264 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-928900","uid":"c7dc49f8-e0c9-4799-884c-da76f8c1543f","resourceVersion":"394","creationTimestamp":"2024-10-28T11:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-928900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f","minikube.k8s.io/name":"functional-928900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_14_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:14:30Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I1028 11:15:47.432855    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-928900
	I1028 11:15:47.432946    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:47.433004    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:47.433004    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:47.438155    4264 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:15:47.438155    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:47.438155    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:47.438155    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:47 GMT
	I1028 11:15:47.438155    4264 round_trippers.go:580]     Audit-Id: 89ba2cbf-f4ab-4996-ac4c-e2582eb66107
	I1028 11:15:47.438155    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:47.438155    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:47.438155    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:47.438155    4264 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-928900","namespace":"kube-system","uid":"00df3313-23ae-438c-8c10-438154994614","resourceVersion":"466","creationTimestamp":"2024-10-28T11:14:35Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"84fd6e349fc38958f545bfe6481372bf","kubernetes.io/config.mirror":"84fd6e349fc38958f545bfe6481372bf","kubernetes.io/config.seen":"2024-10-28T11:14:34.563925801Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-928900","uid":"c7dc49f8-e0c9-4799-884c-da76f8c1543f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:14:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I1028 11:15:47.438899    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/nodes/functional-928900
	I1028 11:15:47.438899    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:47.438899    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:47.438899    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:47.445312    4264 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:15:47.445312    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:47.445312    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:47.445312    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:47.445312    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:47.445312    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:47.445312    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:47 GMT
	I1028 11:15:47.445312    4264 round_trippers.go:580]     Audit-Id: c20fecf3-369e-4e4b-a82d-362b48c08bf3
	I1028 11:15:47.445312    4264 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-928900","uid":"c7dc49f8-e0c9-4799-884c-da76f8c1543f","resourceVersion":"394","creationTimestamp":"2024-10-28T11:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-928900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f","minikube.k8s.io/name":"functional-928900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_14_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:14:30Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I1028 11:15:47.933781    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-928900
	I1028 11:15:47.933781    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:47.933781    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:47.933781    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:47.939452    4264 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:15:47.939564    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:47.939564    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:47 GMT
	I1028 11:15:47.939564    4264 round_trippers.go:580]     Audit-Id: 5ef1f16b-7977-4cfd-84a9-ad6f64b1a024
	I1028 11:15:47.939564    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:47.939564    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:47.939564    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:47.939564    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:47.939564    4264 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-928900","namespace":"kube-system","uid":"00df3313-23ae-438c-8c10-438154994614","resourceVersion":"466","creationTimestamp":"2024-10-28T11:14:35Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"84fd6e349fc38958f545bfe6481372bf","kubernetes.io/config.mirror":"84fd6e349fc38958f545bfe6481372bf","kubernetes.io/config.seen":"2024-10-28T11:14:34.563925801Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-928900","uid":"c7dc49f8-e0c9-4799-884c-da76f8c1543f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:14:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I1028 11:15:47.940502    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/nodes/functional-928900
	I1028 11:15:47.940502    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:47.940502    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:47.940502    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:47.945550    4264 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:15:47.946090    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:47.946090    4264 round_trippers.go:580]     Audit-Id: d85a8d11-d004-4d0c-abab-f0e1d8a05a8a
	I1028 11:15:47.946090    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:47.946090    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:47.946090    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:47.946090    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:47.946090    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:47 GMT
	I1028 11:15:47.946247    4264 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-928900","uid":"c7dc49f8-e0c9-4799-884c-da76f8c1543f","resourceVersion":"394","creationTimestamp":"2024-10-28T11:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-928900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f","minikube.k8s.io/name":"functional-928900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_14_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:14:30Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I1028 11:15:48.433478    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-928900
	I1028 11:15:48.433478    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:48.433478    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:48.433478    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:48.439094    4264 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:15:48.439094    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:48.439094    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:48 GMT
	I1028 11:15:48.439094    4264 round_trippers.go:580]     Audit-Id: c14493b5-cb0c-47d2-be45-9400af97ec65
	I1028 11:15:48.439094    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:48.439198    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:48.439198    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:48.439198    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:48.439452    4264 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-928900","namespace":"kube-system","uid":"00df3313-23ae-438c-8c10-438154994614","resourceVersion":"466","creationTimestamp":"2024-10-28T11:14:35Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"84fd6e349fc38958f545bfe6481372bf","kubernetes.io/config.mirror":"84fd6e349fc38958f545bfe6481372bf","kubernetes.io/config.seen":"2024-10-28T11:14:34.563925801Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-928900","uid":"c7dc49f8-e0c9-4799-884c-da76f8c1543f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:14:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I1028 11:15:48.440102    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/nodes/functional-928900
	I1028 11:15:48.440102    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:48.440149    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:48.440149    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:48.445399    4264 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:15:48.445399    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:48.445399    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:48.445399    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:48 GMT
	I1028 11:15:48.445399    4264 round_trippers.go:580]     Audit-Id: ae0cec96-84c7-4ee4-8007-b2958e90dd80
	I1028 11:15:48.445399    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:48.445399    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:48.445399    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:48.445399    4264 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-928900","uid":"c7dc49f8-e0c9-4799-884c-da76f8c1543f","resourceVersion":"394","creationTimestamp":"2024-10-28T11:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-928900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f","minikube.k8s.io/name":"functional-928900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_14_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:14:30Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I1028 11:15:48.933116    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-928900
	I1028 11:15:48.933116    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:48.933116    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:48.933116    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:48.939539    4264 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:15:48.939539    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:48.939539    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:48 GMT
	I1028 11:15:48.939539    4264 round_trippers.go:580]     Audit-Id: 4755fdf1-fbdd-412c-b1d8-b7b52c21f894
	I1028 11:15:48.939539    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:48.939539    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:48.939539    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:48.939539    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:48.940129    4264 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-928900","namespace":"kube-system","uid":"00df3313-23ae-438c-8c10-438154994614","resourceVersion":"466","creationTimestamp":"2024-10-28T11:14:35Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"84fd6e349fc38958f545bfe6481372bf","kubernetes.io/config.mirror":"84fd6e349fc38958f545bfe6481372bf","kubernetes.io/config.seen":"2024-10-28T11:14:34.563925801Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-928900","uid":"c7dc49f8-e0c9-4799-884c-da76f8c1543f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:14:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I1028 11:15:48.940890    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/nodes/functional-928900
	I1028 11:15:48.940890    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:48.940890    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:48.940890    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:48.947609    4264 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:15:48.947609    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:48.947609    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:48.947609    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:48 GMT
	I1028 11:15:48.947609    4264 round_trippers.go:580]     Audit-Id: 2104ecf3-1bc0-4590-9f19-4916cebc76dd
	I1028 11:15:48.947609    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:48.947609    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:48.947609    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:48.948204    4264 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-928900","uid":"c7dc49f8-e0c9-4799-884c-da76f8c1543f","resourceVersion":"394","creationTimestamp":"2024-10-28T11:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-928900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f","minikube.k8s.io/name":"functional-928900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_14_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:14:30Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I1028 11:15:48.948400    4264 pod_ready.go:103] pod "kube-apiserver-functional-928900" in "kube-system" namespace has status "Ready":"False"
	I1028 11:15:49.432546    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-928900
	I1028 11:15:49.432546    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:49.432546    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:49.432546    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:49.438501    4264 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:15:49.438501    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:49.438501    4264 round_trippers.go:580]     Audit-Id: ee27993d-e944-420a-9631-683bac6ebdfd
	I1028 11:15:49.438501    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:49.438501    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:49.438501    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:49.438501    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:49.438501    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:49 GMT
	I1028 11:15:49.439336    4264 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-928900","namespace":"kube-system","uid":"00df3313-23ae-438c-8c10-438154994614","resourceVersion":"466","creationTimestamp":"2024-10-28T11:14:35Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"84fd6e349fc38958f545bfe6481372bf","kubernetes.io/config.mirror":"84fd6e349fc38958f545bfe6481372bf","kubernetes.io/config.seen":"2024-10-28T11:14:34.563925801Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-928900","uid":"c7dc49f8-e0c9-4799-884c-da76f8c1543f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:14:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I1028 11:15:49.439926    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/nodes/functional-928900
	I1028 11:15:49.439926    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:49.439926    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:49.439926    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:49.445632    4264 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:15:49.446170    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:49.446170    4264 round_trippers.go:580]     Audit-Id: 3291e822-948f-42ff-8246-560264fdc8fe
	I1028 11:15:49.446170    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:49.446170    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:49.446170    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:49.446170    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:49.446263    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:49 GMT
	I1028 11:15:49.446327    4264 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-928900","uid":"c7dc49f8-e0c9-4799-884c-da76f8c1543f","resourceVersion":"394","creationTimestamp":"2024-10-28T11:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-928900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f","minikube.k8s.io/name":"functional-928900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_14_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:14:30Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I1028 11:15:49.932789    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-928900
	I1028 11:15:49.932789    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:49.932789    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:49.932789    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:49.939169    4264 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:15:49.939169    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:49.939169    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:49.939169    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:49 GMT
	I1028 11:15:49.939169    4264 round_trippers.go:580]     Audit-Id: 1d7ba345-5851-4d5d-af4d-6ea0361fceb1
	I1028 11:15:49.939169    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:49.939169    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:49.939169    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:49.939169    4264 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-928900","namespace":"kube-system","uid":"00df3313-23ae-438c-8c10-438154994614","resourceVersion":"466","creationTimestamp":"2024-10-28T11:14:35Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"84fd6e349fc38958f545bfe6481372bf","kubernetes.io/config.mirror":"84fd6e349fc38958f545bfe6481372bf","kubernetes.io/config.seen":"2024-10-28T11:14:34.563925801Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-928900","uid":"c7dc49f8-e0c9-4799-884c-da76f8c1543f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:14:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I1028 11:15:49.940261    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/nodes/functional-928900
	I1028 11:15:49.940261    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:49.940261    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:49.940261    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:49.946979    4264 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:15:49.947017    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:49.947017    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:49 GMT
	I1028 11:15:49.947017    4264 round_trippers.go:580]     Audit-Id: c109631e-36cd-47c0-b35e-58cfeb88d8e8
	I1028 11:15:49.947017    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:49.947017    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:49.947065    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:49.947065    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:49.947201    4264 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-928900","uid":"c7dc49f8-e0c9-4799-884c-da76f8c1543f","resourceVersion":"394","creationTimestamp":"2024-10-28T11:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-928900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f","minikube.k8s.io/name":"functional-928900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_14_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:14:30Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I1028 11:15:50.432333    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-928900
	I1028 11:15:50.432333    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:50.432333    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:50.432333    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:50.438320    4264 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:15:50.438320    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:50.438320    4264 round_trippers.go:580]     Audit-Id: a7b21f8c-70ee-415a-ae62-d074c5409670
	I1028 11:15:50.438320    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:50.438320    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:50.438320    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:50.438320    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:50.438856    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:50 GMT
	I1028 11:15:50.439049    4264 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-928900","namespace":"kube-system","uid":"00df3313-23ae-438c-8c10-438154994614","resourceVersion":"466","creationTimestamp":"2024-10-28T11:14:35Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"84fd6e349fc38958f545bfe6481372bf","kubernetes.io/config.mirror":"84fd6e349fc38958f545bfe6481372bf","kubernetes.io/config.seen":"2024-10-28T11:14:34.563925801Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-928900","uid":"c7dc49f8-e0c9-4799-884c-da76f8c1543f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:14:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I1028 11:15:50.439374    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/nodes/functional-928900
	I1028 11:15:50.439374    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:50.439374    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:50.439374    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:50.445405    4264 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:15:50.445448    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:50.445448    4264 round_trippers.go:580]     Audit-Id: 7d5bbb71-d43e-4548-8c21-85f641988bf9
	I1028 11:15:50.445448    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:50.445448    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:50.445448    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:50.445448    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:50.445448    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:50 GMT
	I1028 11:15:50.445988    4264 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-928900","uid":"c7dc49f8-e0c9-4799-884c-da76f8c1543f","resourceVersion":"394","creationTimestamp":"2024-10-28T11:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-928900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f","minikube.k8s.io/name":"functional-928900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_14_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:14:30Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I1028 11:15:50.933643    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-928900
	I1028 11:15:50.933722    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:50.933722    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:50.933722    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:50.939471    4264 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:15:50.939471    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:50.939471    4264 round_trippers.go:580]     Audit-Id: 41afb557-f90a-4aea-aa52-dd4a9f521370
	I1028 11:15:50.939471    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:50.939471    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:50.939471    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:50.939471    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:50.939471    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:50 GMT
	I1028 11:15:50.939471    4264 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-928900","namespace":"kube-system","uid":"00df3313-23ae-438c-8c10-438154994614","resourceVersion":"466","creationTimestamp":"2024-10-28T11:14:35Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"84fd6e349fc38958f545bfe6481372bf","kubernetes.io/config.mirror":"84fd6e349fc38958f545bfe6481372bf","kubernetes.io/config.seen":"2024-10-28T11:14:34.563925801Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-928900","uid":"c7dc49f8-e0c9-4799-884c-da76f8c1543f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:14:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I1028 11:15:50.940660    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/nodes/functional-928900
	I1028 11:15:50.940660    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:50.940660    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:50.940660    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:50.946855    4264 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:15:50.946855    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:50.946855    4264 round_trippers.go:580]     Audit-Id: 5a0a20d2-c466-4208-9ad4-0642fb3fc5ad
	I1028 11:15:50.946855    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:50.946855    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:50.946855    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:50.946855    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:50.946855    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:50 GMT
	I1028 11:15:50.946855    4264 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-928900","uid":"c7dc49f8-e0c9-4799-884c-da76f8c1543f","resourceVersion":"394","creationTimestamp":"2024-10-28T11:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-928900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f","minikube.k8s.io/name":"functional-928900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_14_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:14:30Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I1028 11:15:51.432573    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-928900
	I1028 11:15:51.432573    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:51.432573    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:51.432573    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:51.438642    4264 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:15:51.438665    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:51.438732    4264 round_trippers.go:580]     Audit-Id: 7922aa95-1d75-4969-a18f-c18749d3e7f6
	I1028 11:15:51.438757    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:51.438778    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:51.438778    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:51.438814    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:51.438814    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:51 GMT
	I1028 11:15:51.439220    4264 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-928900","namespace":"kube-system","uid":"00df3313-23ae-438c-8c10-438154994614","resourceVersion":"466","creationTimestamp":"2024-10-28T11:14:35Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"84fd6e349fc38958f545bfe6481372bf","kubernetes.io/config.mirror":"84fd6e349fc38958f545bfe6481372bf","kubernetes.io/config.seen":"2024-10-28T11:14:34.563925801Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-928900","uid":"c7dc49f8-e0c9-4799-884c-da76f8c1543f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:14:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I1028 11:15:51.439343    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/nodes/functional-928900
	I1028 11:15:51.439910    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:51.439910    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:51.439910    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:51.446042    4264 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:15:51.446042    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:51.446042    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:51.446042    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:51 GMT
	I1028 11:15:51.446042    4264 round_trippers.go:580]     Audit-Id: ac06f031-f1a4-4548-b29d-95d2d3bd65e1
	I1028 11:15:51.446042    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:51.446042    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:51.446042    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:51.446042    4264 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-928900","uid":"c7dc49f8-e0c9-4799-884c-da76f8c1543f","resourceVersion":"394","creationTimestamp":"2024-10-28T11:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-928900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f","minikube.k8s.io/name":"functional-928900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_14_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:14:30Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I1028 11:15:51.446894    4264 pod_ready.go:103] pod "kube-apiserver-functional-928900" in "kube-system" namespace has status "Ready":"False"
	I1028 11:15:51.933176    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-928900
	I1028 11:15:51.933176    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:51.933176    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:51.933176    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:51.939099    4264 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:15:51.939125    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:51.939125    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:51 GMT
	I1028 11:15:51.939125    4264 round_trippers.go:580]     Audit-Id: 3be5b181-9d25-43f6-8ed7-177663b9c962
	I1028 11:15:51.939244    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:51.939273    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:51.939313    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:51.939313    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:51.939550    4264 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-928900","namespace":"kube-system","uid":"00df3313-23ae-438c-8c10-438154994614","resourceVersion":"466","creationTimestamp":"2024-10-28T11:14:35Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"84fd6e349fc38958f545bfe6481372bf","kubernetes.io/config.mirror":"84fd6e349fc38958f545bfe6481372bf","kubernetes.io/config.seen":"2024-10-28T11:14:34.563925801Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-928900","uid":"c7dc49f8-e0c9-4799-884c-da76f8c1543f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:14:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I1028 11:15:51.939773    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/nodes/functional-928900
	I1028 11:15:51.939773    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:51.939773    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:51.939773    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:51.946112    4264 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:15:51.946112    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:51.946112    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:51.946112    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:51.946112    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:51.946112    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:51.946112    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:51 GMT
	I1028 11:15:51.946659    4264 round_trippers.go:580]     Audit-Id: f84d1311-7296-41d7-9025-4f6d091b3180
	I1028 11:15:51.946934    4264 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-928900","uid":"c7dc49f8-e0c9-4799-884c-da76f8c1543f","resourceVersion":"394","creationTimestamp":"2024-10-28T11:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-928900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f","minikube.k8s.io/name":"functional-928900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_14_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:14:30Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I1028 11:15:52.433027    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-928900
	I1028 11:15:52.433027    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:52.433027    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:52.433027    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:52.438638    4264 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:15:52.438638    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:52.438638    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:52.438638    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:52 GMT
	I1028 11:15:52.438638    4264 round_trippers.go:580]     Audit-Id: befed544-6e63-45b0-9da2-d958ae1cde83
	I1028 11:15:52.438638    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:52.438638    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:52.438638    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:52.439527    4264 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-928900","namespace":"kube-system","uid":"00df3313-23ae-438c-8c10-438154994614","resourceVersion":"466","creationTimestamp":"2024-10-28T11:14:35Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"84fd6e349fc38958f545bfe6481372bf","kubernetes.io/config.mirror":"84fd6e349fc38958f545bfe6481372bf","kubernetes.io/config.seen":"2024-10-28T11:14:34.563925801Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-928900","uid":"c7dc49f8-e0c9-4799-884c-da76f8c1543f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:14:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I1028 11:15:52.440140    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/nodes/functional-928900
	I1028 11:15:52.440140    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:52.440140    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:52.440140    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:52.446955    4264 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:15:52.446955    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:52.446955    4264 round_trippers.go:580]     Audit-Id: bd6bc167-5467-44b6-80bc-df89363eae78
	I1028 11:15:52.447054    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:52.447054    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:52.447054    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:52.447054    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:52.447054    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:52 GMT
	I1028 11:15:52.447181    4264 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-928900","uid":"c7dc49f8-e0c9-4799-884c-da76f8c1543f","resourceVersion":"394","creationTimestamp":"2024-10-28T11:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-928900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f","minikube.k8s.io/name":"functional-928900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_14_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:14:30Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I1028 11:15:52.932781    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-928900
	I1028 11:15:52.932940    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:52.932940    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:52.932940    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:52.939950    4264 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 11:15:52.939950    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:52.939950    4264 round_trippers.go:580]     Audit-Id: 883bd353-f710-4fb6-9205-e7304c758769
	I1028 11:15:52.939950    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:52.939950    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:52.939950    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:52.939950    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:52.939950    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:52 GMT
	I1028 11:15:52.939950    4264 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-928900","namespace":"kube-system","uid":"00df3313-23ae-438c-8c10-438154994614","resourceVersion":"466","creationTimestamp":"2024-10-28T11:14:35Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"84fd6e349fc38958f545bfe6481372bf","kubernetes.io/config.mirror":"84fd6e349fc38958f545bfe6481372bf","kubernetes.io/config.seen":"2024-10-28T11:14:34.563925801Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-928900","uid":"c7dc49f8-e0c9-4799-884c-da76f8c1543f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:14:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I1028 11:15:52.941106    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/nodes/functional-928900
	I1028 11:15:52.941106    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:52.941106    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:52.941106    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:52.946691    4264 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:15:52.947222    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:52.947506    4264 round_trippers.go:580]     Audit-Id: 0932f7d5-b560-467e-a8eb-adab27827963
	I1028 11:15:52.947506    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:52.947506    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:52.947506    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:52.947506    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:52.947506    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:52 GMT
	I1028 11:15:52.947506    4264 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-928900","uid":"c7dc49f8-e0c9-4799-884c-da76f8c1543f","resourceVersion":"394","creationTimestamp":"2024-10-28T11:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-928900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f","minikube.k8s.io/name":"functional-928900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_14_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:14:30Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I1028 11:15:53.432633    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-928900
	I1028 11:15:53.432633    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:53.432633    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:53.432633    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:53.438348    4264 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:15:53.438348    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:53.438413    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:53 GMT
	I1028 11:15:53.438442    4264 round_trippers.go:580]     Audit-Id: 4a0a334b-d917-4518-b083-1c97627b91f0
	I1028 11:15:53.438442    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:53.438478    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:53.438478    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:53.438478    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:53.438644    4264 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-928900","namespace":"kube-system","uid":"00df3313-23ae-438c-8c10-438154994614","resourceVersion":"466","creationTimestamp":"2024-10-28T11:14:35Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"84fd6e349fc38958f545bfe6481372bf","kubernetes.io/config.mirror":"84fd6e349fc38958f545bfe6481372bf","kubernetes.io/config.seen":"2024-10-28T11:14:34.563925801Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-928900","uid":"c7dc49f8-e0c9-4799-884c-da76f8c1543f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:14:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I1028 11:15:53.439453    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/nodes/functional-928900
	I1028 11:15:53.439453    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:53.439453    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:53.439453    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:53.445596    4264 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:15:53.445596    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:53.445596    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:53.445596    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:53.445596    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:53.445596    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:53 GMT
	I1028 11:15:53.445596    4264 round_trippers.go:580]     Audit-Id: 976cadbe-f838-4de8-8150-2df890e69ce5
	I1028 11:15:53.445596    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:53.446301    4264 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-928900","uid":"c7dc49f8-e0c9-4799-884c-da76f8c1543f","resourceVersion":"394","creationTimestamp":"2024-10-28T11:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-928900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f","minikube.k8s.io/name":"functional-928900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_14_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:14:30Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I1028 11:15:53.932519    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-928900
	I1028 11:15:53.932519    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:53.932519    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:53.932519    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:53.939118    4264 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:15:53.939118    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:53.939202    4264 round_trippers.go:580]     Audit-Id: f5551d84-bf42-493b-9bce-6cc16d37d579
	I1028 11:15:53.939227    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:53.939227    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:53.939227    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:53.939227    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:53.939227    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:53 GMT
	I1028 11:15:53.939569    4264 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-928900","namespace":"kube-system","uid":"00df3313-23ae-438c-8c10-438154994614","resourceVersion":"466","creationTimestamp":"2024-10-28T11:14:35Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"84fd6e349fc38958f545bfe6481372bf","kubernetes.io/config.mirror":"84fd6e349fc38958f545bfe6481372bf","kubernetes.io/config.seen":"2024-10-28T11:14:34.563925801Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-928900","uid":"c7dc49f8-e0c9-4799-884c-da76f8c1543f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:14:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I1028 11:15:53.939656    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/nodes/functional-928900
	I1028 11:15:53.940229    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:53.940229    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:53.940229    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:53.946908    4264 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:15:53.946908    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:53.946908    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:53.946908    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:53.946908    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:53 GMT
	I1028 11:15:53.946908    4264 round_trippers.go:580]     Audit-Id: 2fc77f6d-be21-452a-9db3-1d8cd70f8261
	I1028 11:15:53.946908    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:53.946908    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:53.946908    4264 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-928900","uid":"c7dc49f8-e0c9-4799-884c-da76f8c1543f","resourceVersion":"394","creationTimestamp":"2024-10-28T11:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-928900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f","minikube.k8s.io/name":"functional-928900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_14_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:14:30Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I1028 11:15:53.947606    4264 pod_ready.go:103] pod "kube-apiserver-functional-928900" in "kube-system" namespace has status "Ready":"False"
	I1028 11:15:54.433169    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-928900
	I1028 11:15:54.433169    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:54.433169    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:54.433169    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:54.440754    4264 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 11:15:54.440754    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:54.440754    4264 round_trippers.go:580]     Audit-Id: 70a6e3e0-41d6-4435-98b0-df72aa0a42c1
	I1028 11:15:54.440754    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:54.440754    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:54.440754    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:54.440754    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:54.440754    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:54 GMT
	I1028 11:15:54.441004    4264 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-928900","namespace":"kube-system","uid":"00df3313-23ae-438c-8c10-438154994614","resourceVersion":"548","creationTimestamp":"2024-10-28T11:14:35Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"84fd6e349fc38958f545bfe6481372bf","kubernetes.io/config.mirror":"84fd6e349fc38958f545bfe6481372bf","kubernetes.io/config.seen":"2024-10-28T11:14:34.563925801Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-928900","uid":"c7dc49f8-e0c9-4799-884c-da76f8c1543f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:14:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8742 chars]
	I1028 11:15:54.441713    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/nodes/functional-928900
	I1028 11:15:54.441792    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:54.441792    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:54.441792    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:54.447457    4264 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:15:54.447457    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:54.447457    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:54.447555    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:54.447555    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:54.447643    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:54.447710    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:54 GMT
	I1028 11:15:54.447710    4264 round_trippers.go:580]     Audit-Id: 5e9f1aee-f31c-479c-89f0-e882fd13f0a4
	I1028 11:15:54.447820    4264 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-928900","uid":"c7dc49f8-e0c9-4799-884c-da76f8c1543f","resourceVersion":"394","creationTimestamp":"2024-10-28T11:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-928900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f","minikube.k8s.io/name":"functional-928900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_14_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:14:30Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I1028 11:15:54.447820    4264 pod_ready.go:93] pod "kube-apiserver-functional-928900" in "kube-system" namespace has status "Ready":"True"
	I1028 11:15:54.447820    4264 pod_ready.go:82] duration metric: took 10.0155337s for pod "kube-apiserver-functional-928900" in "kube-system" namespace to be "Ready" ...
	I1028 11:15:54.448352    4264 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-functional-928900" in "kube-system" namespace to be "Ready" ...
	I1028 11:15:54.448425    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-928900
	I1028 11:15:54.448470    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:54.448506    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:54.448506    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:54.451817    4264 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:15:54.451817    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:54.451817    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:54 GMT
	I1028 11:15:54.451817    4264 round_trippers.go:580]     Audit-Id: ca3557f0-3ac8-4904-b0d5-2629779e5c8f
	I1028 11:15:54.451817    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:54.451817    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:54.451817    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:54.451817    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:54.452808    4264 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-928900","namespace":"kube-system","uid":"63481348-7064-474f-801d-90199e29226b","resourceVersion":"469","creationTimestamp":"2024-10-28T11:14:32Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c337e0ba137a0a3170a9fff62af5dd02","kubernetes.io/config.mirror":"c337e0ba137a0a3170a9fff62af5dd02","kubernetes.io/config.seen":"2024-10-28T11:14:25.343558929Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-928900","uid":"c7dc49f8-e0c9-4799-884c-da76f8c1543f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:14:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 8577 chars]
	I1028 11:15:54.453511    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/nodes/functional-928900
	I1028 11:15:54.453579    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:54.453579    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:54.453579    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:54.457185    4264 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:15:54.457185    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:54.457185    4264 round_trippers.go:580]     Audit-Id: 3218f01f-f909-4963-8df3-36da7c1dbc98
	I1028 11:15:54.457185    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:54.457185    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:54.457185    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:54.457185    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:54.457185    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:54 GMT
	I1028 11:15:54.457185    4264 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-928900","uid":"c7dc49f8-e0c9-4799-884c-da76f8c1543f","resourceVersion":"394","creationTimestamp":"2024-10-28T11:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-928900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f","minikube.k8s.io/name":"functional-928900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_14_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:14:30Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I1028 11:15:54.949099    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-928900
	I1028 11:15:54.949099    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:54.949099    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:54.949099    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:54.956542    4264 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 11:15:54.956635    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:54.956635    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:54 GMT
	I1028 11:15:54.956635    4264 round_trippers.go:580]     Audit-Id: f614297b-cd88-4979-b54a-95f26a13df86
	I1028 11:15:54.956635    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:54.956635    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:54.956635    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:54.956733    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:54.957127    4264 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-928900","namespace":"kube-system","uid":"63481348-7064-474f-801d-90199e29226b","resourceVersion":"469","creationTimestamp":"2024-10-28T11:14:32Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c337e0ba137a0a3170a9fff62af5dd02","kubernetes.io/config.mirror":"c337e0ba137a0a3170a9fff62af5dd02","kubernetes.io/config.seen":"2024-10-28T11:14:25.343558929Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-928900","uid":"c7dc49f8-e0c9-4799-884c-da76f8c1543f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:14:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 8577 chars]
	I1028 11:15:54.957832    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/nodes/functional-928900
	I1028 11:15:54.957832    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:54.957832    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:54.957832    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:54.964306    4264 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:15:54.964306    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:54.964306    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:54.964306    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:54.964306    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:54.964306    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:54 GMT
	I1028 11:15:54.964306    4264 round_trippers.go:580]     Audit-Id: fff51347-7b26-43c1-87ba-a12bbe04e9a8
	I1028 11:15:54.964306    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:54.964306    4264 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-928900","uid":"c7dc49f8-e0c9-4799-884c-da76f8c1543f","resourceVersion":"394","creationTimestamp":"2024-10-28T11:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-928900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f","minikube.k8s.io/name":"functional-928900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_14_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:14:30Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I1028 11:15:55.448405    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-928900
	I1028 11:15:55.448405    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:55.448405    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:55.448405    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:55.461860    4264 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1028 11:15:55.461963    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:55.461963    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:55.461963    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:55.461963    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:55.461963    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:55.461963    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:55 GMT
	I1028 11:15:55.461963    4264 round_trippers.go:580]     Audit-Id: fe37501b-bfb6-4026-ad3c-2be999338573
	I1028 11:15:55.462714    4264 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-928900","namespace":"kube-system","uid":"63481348-7064-474f-801d-90199e29226b","resourceVersion":"469","creationTimestamp":"2024-10-28T11:14:32Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c337e0ba137a0a3170a9fff62af5dd02","kubernetes.io/config.mirror":"c337e0ba137a0a3170a9fff62af5dd02","kubernetes.io/config.seen":"2024-10-28T11:14:25.343558929Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-928900","uid":"c7dc49f8-e0c9-4799-884c-da76f8c1543f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:14:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 8577 chars]
	I1028 11:15:55.463728    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/nodes/functional-928900
	I1028 11:15:55.463921    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:55.463921    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:55.463921    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:55.469079    4264 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:15:55.469183    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:55.469183    4264 round_trippers.go:580]     Audit-Id: e0216ce4-20f7-46e2-823b-735df72ec0a4
	I1028 11:15:55.469228    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:55.469228    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:55.469228    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:55.469228    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:55.469228    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:55 GMT
	I1028 11:15:55.470005    4264 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-928900","uid":"c7dc49f8-e0c9-4799-884c-da76f8c1543f","resourceVersion":"394","creationTimestamp":"2024-10-28T11:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-928900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f","minikube.k8s.io/name":"functional-928900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_14_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:14:30Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I1028 11:15:55.949641    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-928900
	I1028 11:15:55.949641    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:55.949641    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:55.949641    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:55.955878    4264 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:15:55.955878    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:55.955878    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:55.955878    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:55 GMT
	I1028 11:15:55.955878    4264 round_trippers.go:580]     Audit-Id: 7c02628c-3fb5-4a7e-a509-6d9c070d5483
	I1028 11:15:55.955878    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:55.955878    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:55.955878    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:55.955878    4264 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-928900","namespace":"kube-system","uid":"63481348-7064-474f-801d-90199e29226b","resourceVersion":"551","creationTimestamp":"2024-10-28T11:14:32Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c337e0ba137a0a3170a9fff62af5dd02","kubernetes.io/config.mirror":"c337e0ba137a0a3170a9fff62af5dd02","kubernetes.io/config.seen":"2024-10-28T11:14:25.343558929Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-928900","uid":"c7dc49f8-e0c9-4799-884c-da76f8c1543f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:14:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 8315 chars]
	I1028 11:15:55.956756    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/nodes/functional-928900
	I1028 11:15:55.956756    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:55.956756    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:55.956756    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:55.962192    4264 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:15:55.962278    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:55.962278    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:55.962321    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:55.962321    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:55.962321    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:55 GMT
	I1028 11:15:55.962321    4264 round_trippers.go:580]     Audit-Id: ac396b60-91fc-4c73-97e9-a1adfbb41eb7
	I1028 11:15:55.962321    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:55.962321    4264 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-928900","uid":"c7dc49f8-e0c9-4799-884c-da76f8c1543f","resourceVersion":"394","creationTimestamp":"2024-10-28T11:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-928900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f","minikube.k8s.io/name":"functional-928900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_14_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:14:30Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I1028 11:15:55.962960    4264 pod_ready.go:93] pod "kube-controller-manager-functional-928900" in "kube-system" namespace has status "Ready":"True"
	I1028 11:15:55.963010    4264 pod_ready.go:82] duration metric: took 1.5146374s for pod "kube-controller-manager-functional-928900" in "kube-system" namespace to be "Ready" ...
	I1028 11:15:55.963010    4264 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-spzh2" in "kube-system" namespace to be "Ready" ...
	I1028 11:15:55.963010    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/namespaces/kube-system/pods/kube-proxy-spzh2
	I1028 11:15:55.963010    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:55.963010    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:55.963010    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:55.967770    4264 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:15:55.967856    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:55.967856    4264 round_trippers.go:580]     Audit-Id: bf7d81f1-1713-49b9-ae0c-d146c06f2433
	I1028 11:15:55.967856    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:55.967891    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:55.967891    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:55.967891    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:55.967891    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:55 GMT
	I1028 11:15:55.967891    4264 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-spzh2","generateName":"kube-proxy-","namespace":"kube-system","uid":"3fd1a5b0-bd4d-488c-8df3-0a73d84993e2","resourceVersion":"472","creationTimestamp":"2024-10-28T11:14:39Z","labels":{"controller-revision-hash":"77987969cc","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"dbe15c31-a0a6-4c10-8158-0cb36e38764d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dbe15c31-a0a6-4c10-8158-0cb36e38764d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6396 chars]
	I1028 11:15:55.968620    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/nodes/functional-928900
	I1028 11:15:55.968620    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:55.968620    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:55.968620    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:55.973319    4264 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:15:55.973319    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:55.973319    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:55.973383    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:55.973383    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:55 GMT
	I1028 11:15:55.973383    4264 round_trippers.go:580]     Audit-Id: a834cd10-34e4-470a-80a0-ff57153934b2
	I1028 11:15:55.973383    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:55.973383    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:55.973506    4264 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-928900","uid":"c7dc49f8-e0c9-4799-884c-da76f8c1543f","resourceVersion":"394","creationTimestamp":"2024-10-28T11:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-928900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f","minikube.k8s.io/name":"functional-928900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_14_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:14:30Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I1028 11:15:55.974316    4264 pod_ready.go:93] pod "kube-proxy-spzh2" in "kube-system" namespace has status "Ready":"True"
	I1028 11:15:55.974316    4264 pod_ready.go:82] duration metric: took 11.3053ms for pod "kube-proxy-spzh2" in "kube-system" namespace to be "Ready" ...
	I1028 11:15:55.974316    4264 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-functional-928900" in "kube-system" namespace to be "Ready" ...
	I1028 11:15:55.974316    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-928900
	I1028 11:15:55.974316    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:55.974316    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:55.974316    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:55.981050    4264 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:15:55.981050    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:55.981050    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:55.981050    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:55.981050    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:55 GMT
	I1028 11:15:55.981050    4264 round_trippers.go:580]     Audit-Id: 5a20a51b-939f-43e1-95c2-6fc697908883
	I1028 11:15:55.981050    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:55.981050    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:55.981773    4264 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-928900","namespace":"kube-system","uid":"c11a0763-b378-4f84-b297-434e88feb23a","resourceVersion":"546","creationTimestamp":"2024-10-28T11:14:35Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"34d1104724ea99f35885ccb979201f57","kubernetes.io/config.mirror":"34d1104724ea99f35885ccb979201f57","kubernetes.io/config.seen":"2024-10-28T11:14:34.563929902Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-928900","uid":"c7dc49f8-e0c9-4799-884c-da76f8c1543f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:14:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5197 chars]
	I1028 11:15:55.981773    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/nodes/functional-928900
	I1028 11:15:55.981773    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:55.981773    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:55.981773    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:55.988718    4264 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:15:55.989298    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:55.989298    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:55 GMT
	I1028 11:15:55.989298    4264 round_trippers.go:580]     Audit-Id: 6fad6696-8dbe-4de4-9325-dc39c94ccffe
	I1028 11:15:55.989298    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:55.989298    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:55.989298    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:55.989347    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:55.989380    4264 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-928900","uid":"c7dc49f8-e0c9-4799-884c-da76f8c1543f","resourceVersion":"394","creationTimestamp":"2024-10-28T11:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-928900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f","minikube.k8s.io/name":"functional-928900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_14_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-28T11:14:30Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I1028 11:15:55.989380    4264 pod_ready.go:93] pod "kube-scheduler-functional-928900" in "kube-system" namespace has status "Ready":"True"
	I1028 11:15:55.990001    4264 pod_ready.go:82] duration metric: took 15.6283ms for pod "kube-scheduler-functional-928900" in "kube-system" namespace to be "Ready" ...
	I1028 11:15:55.990001    4264 pod_ready.go:39] duration metric: took 12.0638622s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 11:15:55.990083    4264 api_server.go:52] waiting for apiserver process to appear ...
	I1028 11:15:56.000826    4264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 11:15:56.026899    4264 command_runner.go:130] > 5873
	I1028 11:15:56.026899    4264 api_server.go:72] duration metric: took 22.9938475s to wait for apiserver process to appear ...
	I1028 11:15:56.026899    4264 api_server.go:88] waiting for apiserver healthz status ...
	I1028 11:15:56.026899    4264 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:59551/healthz ...
	I1028 11:15:56.039726    4264 api_server.go:279] https://127.0.0.1:59551/healthz returned 200:
	ok
	I1028 11:15:56.039726    4264 round_trippers.go:463] GET https://127.0.0.1:59551/version
	I1028 11:15:56.039726    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:56.039726    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:56.039726    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:56.042950    4264 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:15:56.042989    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:56.042989    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:56.043036    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:56.043065    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:56.043065    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:56.043065    4264 round_trippers.go:580]     Content-Length: 263
	I1028 11:15:56.043065    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:56 GMT
	I1028 11:15:56.043065    4264 round_trippers.go:580]     Audit-Id: f7862ffb-ba5e-4c85-b1a1-b21c1765bd34
	I1028 11:15:56.043065    4264 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.2",
	  "gitCommit": "5864a4677267e6adeae276ad85882a8714d69d9d",
	  "gitTreeState": "clean",
	  "buildDate": "2024-10-22T20:28:14Z",
	  "goVersion": "go1.22.8",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1028 11:15:56.043065    4264 api_server.go:141] control plane version: v1.31.2
	I1028 11:15:56.043065    4264 api_server.go:131] duration metric: took 16.1658ms to wait for apiserver health ...
	I1028 11:15:56.043065    4264 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 11:15:56.043065    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/namespaces/kube-system/pods
	I1028 11:15:56.043065    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:56.043065    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:56.043065    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:56.050519    4264 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 11:15:56.050655    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:56.050683    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:56.050683    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:56.050683    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:56.050683    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:56 GMT
	I1028 11:15:56.050683    4264 round_trippers.go:580]     Audit-Id: 2aea28f4-7488-41f5-984a-e1612c8fa8b2
	I1028 11:15:56.050683    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:56.053928    4264 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"551"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-9j7zg","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"b456d1cc-467f-4b3d-a619-bc9f17258666","resourceVersion":"540","creationTimestamp":"2024-10-28T11:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"7533fda5-404a-4712-bcb1-46f70a06c53e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7533fda5-404a-4712-bcb1-46f70a06c53e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53488 chars]
	I1028 11:15:56.057439    4264 system_pods.go:59] 7 kube-system pods found
	I1028 11:15:56.057439    4264 system_pods.go:61] "coredns-7c65d6cfc9-9j7zg" [b456d1cc-467f-4b3d-a619-bc9f17258666] Running
	I1028 11:15:56.057439    4264 system_pods.go:61] "etcd-functional-928900" [215e6379-1e1b-4c22-825f-3f69322a34de] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1028 11:15:56.057439    4264 system_pods.go:61] "kube-apiserver-functional-928900" [00df3313-23ae-438c-8c10-438154994614] Running
	I1028 11:15:56.057439    4264 system_pods.go:61] "kube-controller-manager-functional-928900" [63481348-7064-474f-801d-90199e29226b] Running
	I1028 11:15:56.057439    4264 system_pods.go:61] "kube-proxy-spzh2" [3fd1a5b0-bd4d-488c-8df3-0a73d84993e2] Running
	I1028 11:15:56.057439    4264 system_pods.go:61] "kube-scheduler-functional-928900" [c11a0763-b378-4f84-b297-434e88feb23a] Running
	I1028 11:15:56.057439    4264 system_pods.go:61] "storage-provisioner" [b9804189-e6ae-4e66-9f4c-ec6a9431e6b3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1028 11:15:56.057439    4264 system_pods.go:74] duration metric: took 14.373ms to wait for pod list to return data ...
	I1028 11:15:56.057439    4264 default_sa.go:34] waiting for default service account to be created ...
	I1028 11:15:56.057439    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/namespaces/default/serviceaccounts
	I1028 11:15:56.057439    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:56.057439    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:56.057439    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:56.063462    4264 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:15:56.063462    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:56.063462    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:56.063462    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:56.063462    4264 round_trippers.go:580]     Content-Length: 261
	I1028 11:15:56.063462    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:56 GMT
	I1028 11:15:56.063462    4264 round_trippers.go:580]     Audit-Id: a836f19f-eb09-4fce-8275-a6e6a917b228
	I1028 11:15:56.063462    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:56.063462    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:56.063462    4264 request.go:1351] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"551"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"5346d8fb-0ba6-47f2-9519-bcaeee07c59f","resourceVersion":"300","creationTimestamp":"2024-10-28T11:14:39Z"}}]}
	I1028 11:15:56.063462    4264 default_sa.go:45] found service account: "default"
	I1028 11:15:56.063462    4264 default_sa.go:55] duration metric: took 6.0236ms for default service account to be created ...
	I1028 11:15:56.063462    4264 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 11:15:56.063462    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/namespaces/kube-system/pods
	I1028 11:15:56.063462    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:56.063462    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:56.063462    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:56.075975    4264 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1028 11:15:56.075975    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:56.075975    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:56.075975    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:56.075975    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:56.075975    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:56 GMT
	I1028 11:15:56.075975    4264 round_trippers.go:580]     Audit-Id: 3c4a3c68-1813-4725-be17-bc0ff3a89d86
	I1028 11:15:56.075975    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:56.076653    4264 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"551"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-9j7zg","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"b456d1cc-467f-4b3d-a619-bc9f17258666","resourceVersion":"540","creationTimestamp":"2024-10-28T11:14:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"7533fda5-404a-4712-bcb1-46f70a06c53e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-28T11:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7533fda5-404a-4712-bcb1-46f70a06c53e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53488 chars]
	I1028 11:15:56.078999    4264 system_pods.go:86] 7 kube-system pods found
	I1028 11:15:56.079075    4264 system_pods.go:89] "coredns-7c65d6cfc9-9j7zg" [b456d1cc-467f-4b3d-a619-bc9f17258666] Running
	I1028 11:15:56.079075    4264 system_pods.go:89] "etcd-functional-928900" [215e6379-1e1b-4c22-825f-3f69322a34de] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1028 11:15:56.079075    4264 system_pods.go:89] "kube-apiserver-functional-928900" [00df3313-23ae-438c-8c10-438154994614] Running
	I1028 11:15:56.079075    4264 system_pods.go:89] "kube-controller-manager-functional-928900" [63481348-7064-474f-801d-90199e29226b] Running
	I1028 11:15:56.079158    4264 system_pods.go:89] "kube-proxy-spzh2" [3fd1a5b0-bd4d-488c-8df3-0a73d84993e2] Running
	I1028 11:15:56.079158    4264 system_pods.go:89] "kube-scheduler-functional-928900" [c11a0763-b378-4f84-b297-434e88feb23a] Running
	I1028 11:15:56.079158    4264 system_pods.go:89] "storage-provisioner" [b9804189-e6ae-4e66-9f4c-ec6a9431e6b3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1028 11:15:56.079191    4264 system_pods.go:126] duration metric: took 15.7282ms to wait for k8s-apps to be running ...
	I1028 11:15:56.079191    4264 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 11:15:56.088741    4264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:15:56.112577    4264 system_svc.go:56] duration metric: took 33.3859ms WaitForService to wait for kubelet
	I1028 11:15:56.112577    4264 kubeadm.go:582] duration metric: took 23.079524s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 11:15:56.112577    4264 node_conditions.go:102] verifying NodePressure condition ...
	I1028 11:15:56.112577    4264 round_trippers.go:463] GET https://127.0.0.1:59551/api/v1/nodes
	I1028 11:15:56.112577    4264 round_trippers.go:469] Request Headers:
	I1028 11:15:56.112577    4264 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:15:56.112577    4264 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1028 11:15:56.120919    4264 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1028 11:15:56.120919    4264 round_trippers.go:577] Response Headers:
	I1028 11:15:56.120919    4264 round_trippers.go:580]     Date: Mon, 28 Oct 2024 11:15:56 GMT
	I1028 11:15:56.120919    4264 round_trippers.go:580]     Audit-Id: fdf3da09-4b25-4ec2-a2b2-d67a6ea313f8
	I1028 11:15:56.120919    4264 round_trippers.go:580]     Cache-Control: no-cache, private
	I1028 11:15:56.120919    4264 round_trippers.go:580]     Content-Type: application/json
	I1028 11:15:56.120919    4264 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b2dcb485-779f-4ce8-938d-fcfc4480f92c
	I1028 11:15:56.120919    4264 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2f4deab4-4969-4126-8394-d17505669045
	I1028 11:15:56.120919    4264 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"552"},"items":[{"metadata":{"name":"functional-928900","uid":"c7dc49f8-e0c9-4799-884c-da76f8c1543f","resourceVersion":"394","creationTimestamp":"2024-10-28T11:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-928900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f","minikube.k8s.io/name":"functional-928900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_28T11_14_35_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedF
ields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4907 chars]
	I1028 11:15:56.121681    4264 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I1028 11:15:56.121681    4264 node_conditions.go:123] node cpu capacity is 16
	I1028 11:15:56.121681    4264 node_conditions.go:105] duration metric: took 9.1037ms to run NodePressure ...
	I1028 11:15:56.121681    4264 start.go:241] waiting for startup goroutines ...
	I1028 11:15:56.121681    4264 start.go:246] waiting for cluster config update ...
	I1028 11:15:56.122243    4264 start.go:255] writing updated cluster config ...
	I1028 11:15:56.134452    4264 ssh_runner.go:195] Run: rm -f paused
	I1028 11:15:56.273233    4264 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 11:15:56.278120    4264 out.go:177] * Done! kubectl is now configured to use "functional-928900" cluster and "default" namespace by default
	
	
	==> Docker <==
	Oct 28 11:15:29 functional-928900 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	Oct 28 11:15:30 functional-928900 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	Oct 28 11:15:30 functional-928900 systemd[1]: cri-docker.service: Deactivated successfully.
	Oct 28 11:15:30 functional-928900 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	Oct 28 11:15:30 functional-928900 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	Oct 28 11:15:30 functional-928900 cri-dockerd[4942]: time="2024-10-28T11:15:30Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Oct 28 11:15:30 functional-928900 cri-dockerd[4942]: time="2024-10-28T11:15:30Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Oct 28 11:15:30 functional-928900 cri-dockerd[4942]: time="2024-10-28T11:15:30Z" level=info msg="Start docker client with request timeout 0s"
	Oct 28 11:15:30 functional-928900 cri-dockerd[4942]: time="2024-10-28T11:15:30Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Oct 28 11:15:30 functional-928900 cri-dockerd[4942]: time="2024-10-28T11:15:30Z" level=info msg="Loaded network plugin cni"
	Oct 28 11:15:30 functional-928900 cri-dockerd[4942]: time="2024-10-28T11:15:30Z" level=info msg="Docker cri networking managed by network plugin cni"
	Oct 28 11:15:30 functional-928900 cri-dockerd[4942]: time="2024-10-28T11:15:30Z" level=info msg="Setting cgroupDriver cgroupfs"
	Oct 28 11:15:30 functional-928900 cri-dockerd[4942]: time="2024-10-28T11:15:30Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Oct 28 11:15:30 functional-928900 cri-dockerd[4942]: time="2024-10-28T11:15:30Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Oct 28 11:15:30 functional-928900 cri-dockerd[4942]: time="2024-10-28T11:15:30Z" level=info msg="Start cri-dockerd grpc backend"
	Oct 28 11:15:30 functional-928900 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Oct 28 11:15:30 functional-928900 cri-dockerd[4942]: time="2024-10-28T11:15:30Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7c65d6cfc9-9j7zg_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"9637a9771e19002e14ae75c81f9db140667aa73820fd2c5e7201ddf621972dcf\""
	Oct 28 11:15:37 functional-928900 cri-dockerd[4942]: time="2024-10-28T11:15:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/06e484c1a452a298db8108e32998a64b5314e5acfd2470b0d86dcdfaee71aee5/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 28 11:15:37 functional-928900 cri-dockerd[4942]: time="2024-10-28T11:15:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e00254b3614b420e74c8cb96194692f362c31c4b09b1032ab41a0383a0c4eb24/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 28 11:15:37 functional-928900 cri-dockerd[4942]: time="2024-10-28T11:15:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/773c0388c5d6373345a476776adcf083a280fb1d42e56da435cb49b7c9c21ed9/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 28 11:15:38 functional-928900 cri-dockerd[4942]: time="2024-10-28T11:15:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/eddb806ee6ef8a59195908aac3a462bcb69b8de41f11fede139dc74581363c4b/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 28 11:15:38 functional-928900 cri-dockerd[4942]: time="2024-10-28T11:15:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9c5f71040bb49b84ca5d81e6612e244955ef8ceab11b3a78054186be68fb7b46/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 28 11:15:38 functional-928900 cri-dockerd[4942]: time="2024-10-28T11:15:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/53188ca0641289388309517fec12abd0577a672500be97fe8eeae0fc2d1ae6d2/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 28 11:15:38 functional-928900 cri-dockerd[4942]: time="2024-10-28T11:15:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/cac727128b1df79c99a73753b07ee5ab644cfb38ea0e7552034d2387161d83d1/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 28 11:15:38 functional-928900 dockerd[4647]: time="2024-10-28T11:15:38.747319291Z" level=info msg="ignoring event" container=e09f909653ddd7257480c855a93db40aa1ba8194e44164f072a0479dd1f5d49f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	eff22e4d28b3c       6e38f40d628db       19 seconds ago       Running             storage-provisioner       2                   06e484c1a452a       storage-provisioner
	c15c0f5aa6ad9       c69fa2e9cbf5f       36 seconds ago       Running             coredns                   1                   cac727128b1df       coredns-7c65d6cfc9-9j7zg
	da51e29246d66       2e96e5913fc06       37 seconds ago       Running             etcd                      1                   53188ca064128       etcd-functional-928900
	38a823e7313b2       505d571f5fd56       37 seconds ago       Running             kube-proxy                1                   9c5f71040bb49       kube-proxy-spzh2
	1cb5791fa8a46       847c7bc1a5418       37 seconds ago       Running             kube-scheduler            1                   eddb806ee6ef8       kube-scheduler-functional-928900
	6b663450ca322       0486b6c53a1b5       38 seconds ago       Running             kube-controller-manager   1                   773c0388c5d63       kube-controller-manager-functional-928900
	d26e861e34d84       9499c9960544e       38 seconds ago       Running             kube-apiserver            1                   e00254b3614b4       kube-apiserver-functional-928900
	e09f909653ddd       6e38f40d628db       38 seconds ago       Exited              storage-provisioner       1                   06e484c1a452a       storage-provisioner
	916de43aaa517       c69fa2e9cbf5f       About a minute ago   Exited              coredns                   0                   9637a9771e190       coredns-7c65d6cfc9-9j7zg
	da74943d3b6bc       505d571f5fd56       About a minute ago   Exited              kube-proxy                0                   a75821ae5d69d       kube-proxy-spzh2
	f329c440d48f2       0486b6c53a1b5       About a minute ago   Exited              kube-controller-manager   0                   2a57cd43090b6       kube-controller-manager-functional-928900
	6a05f72b60d2e       847c7bc1a5418       About a minute ago   Exited              kube-scheduler            0                   eec3476fe92e7       kube-scheduler-functional-928900
	16699a23990ac       9499c9960544e       About a minute ago   Exited              kube-apiserver            0                   d57577bf36f86       kube-apiserver-functional-928900
	fb77da6280ac0       2e96e5913fc06       About a minute ago   Exited              etcd                      0                   6394f90fe726d       etcd-functional-928900
	
	
	==> coredns [916de43aaa51] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: Trace[111047832]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (28-Oct-2024 11:14:43.326) (total time: 21043ms):
	Trace[111047832]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused 21042ms (11:15:04.367)
	Trace[111047832]: [21.043559887s] [21.043559887s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: Trace[951065757]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (28-Oct-2024 11:14:43.326) (total time: 21043ms):
	Trace[951065757]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused 21042ms (11:15:04.367)
	Trace[951065757]: [21.043316862s] [21.043316862s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: Trace[627929958]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (28-Oct-2024 11:14:43.326) (total time: 21043ms):
	Trace[627929958]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused 21043ms (11:15:04.367)
	Trace[627929958]: [21.043721301s] [21.043721301s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c15c0f5aa6ad] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = f869070685748660180df1b7a47d58cdafcf2f368266578c062d1151dc2c900964aecc5975e8882e6de6fdfb6460463e30ebfaad2ec8f0c3c6436f80225b3b5b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:53492 - 9126 "HINFO IN 3320163939583925462.4467833034023351419. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.088658525s
	
	
	==> describe nodes <==
	Name:               functional-928900
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-928900
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f
	                    minikube.k8s.io/name=functional-928900
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_28T11_14_35_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 11:14:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-928900
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 11:16:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 11:16:07 +0000   Mon, 28 Oct 2024 11:14:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 11:16:07 +0000   Mon, 28 Oct 2024 11:14:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 11:16:07 +0000   Mon, 28 Oct 2024 11:14:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 11:16:07 +0000   Mon, 28 Oct 2024 11:14:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-928900
	Capacity:
	  cpu:                16
	  ephemeral-storage:  1055762868Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32868684Ki
	  pods:               110
	Allocatable:
	  cpu:                16
	  ephemeral-storage:  1055762868Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32868684Ki
	  pods:               110
	System Info:
	  Machine ID:                 48f6f6d47ed849b48af06d65305f6f42
	  System UUID:                48f6f6d47ed849b48af06d65305f6f42
	  Boot ID:                    ef217568-0e74-4f75-a115-0b78189354fe
	  Kernel Version:             5.15.153.1-microsoft-standard-WSL2
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-9j7zg                     100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     96s
	  kube-system                 etcd-functional-928900                       100m (0%)     0 (0%)      100Mi (0%)       0 (0%)         103s
	  kube-system                 kube-apiserver-functional-928900             250m (1%)     0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 kube-controller-manager-functional-928900    200m (1%)     0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-proxy-spzh2                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-scheduler-functional-928900             100m (0%)     0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (4%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                             Age   From             Message
	  ----     ------                             ----  ----             -------
	  Normal   Starting                           92s   kube-proxy       
	  Normal   Starting                           31s   kube-proxy       
	  Warning  PossibleMemoryBackedVolumesOnDisk  101s  kubelet          The tmpfs noswap option is not supported. Memory-backed volumes (e.g. secrets, emptyDirs, etc.) might be swapped to disk and should no longer be considered secure.
	  Normal   Starting                           101s  kubelet          Starting kubelet.
	  Warning  CgroupV1                           101s  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced            101s  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory            100s  kubelet          Node functional-928900 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure              100s  kubelet          Node functional-928900 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID               100s  kubelet          Node functional-928900 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode                     97s   node-controller  Node functional-928900 event: Registered Node functional-928900 in Controller
	  Normal   NodeNotReady                       49s   kubelet          Node functional-928900 status is now: NodeNotReady
	  Normal   RegisteredNode                     27s   node-controller  Node functional-928900 event: Registered Node functional-928900 in Controller
	
	
	==> dmesg <==
	[  +0.000830] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000991] FS-Cache: N-cookie d=000000008807e26c{9P.session} n=000000009d4b8f73
	[  +0.001231] FS-Cache: N-key=[10] '34323934393337393835'
	[  +0.011303] WSL (2) ERROR: UtilCreateProcessAndWait:666: /bin/mount failed with 2
	[  +0.002309] WSL (1) ERROR: UtilCreateProcessAndWait:688: /bin/mount failed with status 0xff00
	
	[  +0.002554] WSL (1) ERROR: ConfigMountFsTab:2583: Processing fstab with mount -a failed.
	[  +0.003717] WSL (1) ERROR: ConfigApplyWindowsLibPath:2531: open /etc/ld.so.conf.d/ld.wsl.conf
	[  +0.000003]  failed 2
	[  +0.005999] WSL (3) ERROR: UtilCreateProcessAndWait:666: /bin/mount failed with 2
	[  +0.001944] WSL (1) ERROR: UtilCreateProcessAndWait:688: /bin/mount failed with status 0xff00
	
	[  +0.004085] WSL (4) ERROR: UtilCreateProcessAndWait:666: /bin/mount failed with 2
	[  +0.002085] WSL (1) ERROR: UtilCreateProcessAndWait:688: /bin/mount failed with status 0xff00
	
	[  +0.062137] WSL (1) WARNING: /usr/share/zoneinfo/Etc/UTC not found. Is the tzdata package installed?
	[  +0.108467] misc dxg: dxgk: dxgglobal_acquire_channel_lock: Failed to acquire global channel lock
	[  +0.947927] netlink: 'init': attribute type 4 has an invalid length.
	[Oct28 11:02] tmpfs: Unknown parameter 'noswap'
	[  +9.507247] tmpfs: Unknown parameter 'noswap'
	[Oct28 11:12] tmpfs: Unknown parameter 'noswap'
	[ +10.180456] tmpfs: Unknown parameter 'noswap'
	[Oct28 11:13] tmpfs: Unknown parameter 'noswap'
	[Oct28 11:14] tmpfs: Unknown parameter 'noswap'
	[  +9.215927] tmpfs: Unknown parameter 'noswap'
	
	
	==> etcd [da51e29246d6] <==
	{"level":"info","ts":"2024-10-28T11:15:42.116708Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-10-28T11:15:42.116745Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2024-10-28T11:15:42.116754Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-10-28T11:15:42.124388Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-928900 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-28T11:15:42.124719Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-28T11:15:42.124875Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-28T11:15:42.125126Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-28T11:15:42.125159Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-28T11:15:42.125912Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-28T11:15:42.125922Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-28T11:15:42.127258Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-10-28T11:15:42.127580Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-28T11:15:44.122597Z","caller":"traceutil/trace.go:171","msg":"trace[267281756] transaction","detail":"{read_only:false; response_revision:436; number_of_response:1; }","duration":"198.845915ms","start":"2024-10-28T11:15:43.923613Z","end":"2024-10-28T11:15:44.122459Z","steps":["trace[267281756] 'process raft request'  (duration: 105.861034ms)","trace[267281756] 'compare'  (duration: 91.577033ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-28T11:15:44.123669Z","caller":"traceutil/trace.go:171","msg":"trace[773053251] linearizableReadLoop","detail":"{readStateIndex:458; appliedIndex:457; }","duration":"109.578925ms","start":"2024-10-28T11:15:44.014075Z","end":"2024-10-28T11:15:44.123654Z","steps":["trace[773053251] 'read index received'  (duration: 5.562885ms)","trace[773053251] 'applied index is now lower than readState.Index'  (duration: 104.01394ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-28T11:15:44.123853Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.693438ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/serviceips\" ","response":"range_response_count:1 size:116"}
	{"level":"info","ts":"2024-10-28T11:15:44.123928Z","caller":"traceutil/trace.go:171","msg":"trace[769399783] range","detail":"{range_begin:/registry/ranges/serviceips; range_end:; response_count:1; response_revision:437; }","duration":"109.843254ms","start":"2024-10-28T11:15:44.014070Z","end":"2024-10-28T11:15:44.123913Z","steps":["trace[769399783] 'agreement among raft nodes before linearized reading'  (duration: 109.661335ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T11:15:44.214683Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"200.312369ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T11:15:44.214857Z","caller":"traceutil/trace.go:171","msg":"trace[2003255104] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:438; }","duration":"200.496288ms","start":"2024-10-28T11:15:44.014342Z","end":"2024-10-28T11:15:44.214839Z","steps":["trace[2003255104] 'agreement among raft nodes before linearized reading'  (duration: 200.213859ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T11:15:44.214876Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"200.481687ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/functional-928900\" ","response":"range_response_count:1 size:4443"}
	{"level":"info","ts":"2024-10-28T11:15:44.214930Z","caller":"traceutil/trace.go:171","msg":"trace[1067959395] range","detail":"{range_begin:/registry/minions/functional-928900; range_end:; response_count:1; response_revision:438; }","duration":"200.537792ms","start":"2024-10-28T11:15:44.014378Z","end":"2024-10-28T11:15:44.214916Z","steps":["trace[1067959395] 'agreement among raft nodes before linearized reading'  (duration: 200.453184ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T11:15:44.253635Z","caller":"traceutil/trace.go:171","msg":"trace[1799883742] transaction","detail":"{read_only:false; response_revision:439; number_of_response:1; }","duration":"124.473092ms","start":"2024-10-28T11:15:44.129143Z","end":"2024-10-28T11:15:44.253616Z","steps":["trace[1799883742] 'process raft request'  (duration: 98.68718ms)","trace[1799883742] 'compare'  (duration: 25.588691ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-28T11:15:45.434287Z","caller":"traceutil/trace.go:171","msg":"trace[1609002535] linearizableReadLoop","detail":"{readStateIndex:496; appliedIndex:495; }","duration":"102.161545ms","start":"2024-10-28T11:15:45.332104Z","end":"2024-10-28T11:15:45.434266Z","steps":["trace[1609002535] 'read index received'  (duration: 82.43257ms)","trace[1609002535] 'applied index is now lower than readState.Index'  (duration: 19.727075ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-28T11:15:45.434351Z","caller":"traceutil/trace.go:171","msg":"trace[1310737698] transaction","detail":"{read_only:false; response_revision:473; number_of_response:1; }","duration":"102.274457ms","start":"2024-10-28T11:15:45.332061Z","end":"2024-10-28T11:15:45.434335Z","steps":["trace[1310737698] 'process raft request'  (duration: 82.542182ms)","trace[1310737698] 'compare'  (duration: 19.506452ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-28T11:15:45.434547Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.422273ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:persistent-volume-provisioner\" ","response":"range_response_count:1 size:812"}
	{"level":"info","ts":"2024-10-28T11:15:45.434596Z","caller":"traceutil/trace.go:171","msg":"trace[1710405369] range","detail":"{range_begin:/registry/clusterroles/system:persistent-volume-provisioner; range_end:; response_count:1; response_revision:473; }","duration":"102.480079ms","start":"2024-10-28T11:15:45.332101Z","end":"2024-10-28T11:15:45.434581Z","steps":["trace[1710405369] 'agreement among raft nodes before linearized reading'  (duration: 102.328863ms)"],"step_count":1}
	
	
	==> etcd [fb77da6280ac] <==
	{"level":"info","ts":"2024-10-28T11:14:27.352531Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-928900 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-28T11:14:27.420730Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-28T11:14:27.420987Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-28T11:14:27.421023Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-28T11:14:27.420891Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-28T11:14:27.421316Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T11:14:27.423818Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-28T11:14:27.424764Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-28T11:14:27.425860Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-28T11:14:27.426839Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T11:14:27.427284Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T11:14:27.427344Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T11:14:27.427370Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-10-28T11:14:41.542963Z","caller":"traceutil/trace.go:171","msg":"trace[1456010814] transaction","detail":"{read_only:false; response_revision:348; number_of_response:1; }","duration":"113.374748ms","start":"2024-10-28T11:14:41.429561Z","end":"2024-10-28T11:14:41.542936Z","steps":["trace[1456010814] 'process raft request'  (duration: 113.289239ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T11:14:41.543211Z","caller":"traceutil/trace.go:171","msg":"trace[1603347216] transaction","detail":"{read_only:false; response_revision:347; number_of_response:1; }","duration":"115.706474ms","start":"2024-10-28T11:14:41.427482Z","end":"2024-10-28T11:14:41.543188Z","steps":["trace[1603347216] 'process raft request'  (duration: 111.906704ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T11:15:17.617173Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-10-28T11:15:17.617271Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-928900","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2024-10-28T11:15:17.617460Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-28T11:15:17.617516Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-28T11:15:17.619072Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-28T11:15:17.619277Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-10-28T11:15:17.816545Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2024-10-28T11:15:17.828645Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-10-28T11:15:17.828819Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-10-28T11:15:17.828840Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-928900","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 11:16:15 up 20 min,  0 users,  load average: 1.91, 1.86, 1.30
	Linux functional-928900 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [16699a23990a] <==
	W1028 11:15:26.864105       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 11:15:26.877047       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 11:15:26.885351       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 11:15:26.917366       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 11:15:26.917536       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 11:15:26.920214       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 11:15:26.926255       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 11:15:26.959440       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 11:15:26.962426       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 11:15:26.967074       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 11:15:26.977291       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 11:15:27.017452       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 11:15:27.058895       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 11:15:27.076472       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 11:15:27.172211       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 11:15:27.209789       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 11:15:27.364373       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 11:15:27.385034       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 11:15:27.401930       1 logging.go:55] [core] [Channel #6 SubChannel #7]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 11:15:27.423090       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 11:15:27.451903       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 11:15:27.486834       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 11:15:27.489780       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 11:15:27.501236       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 11:15:27.524798       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [d26e861e34d8] <==
	I1028 11:15:43.662583       1 local_available_controller.go:156] Starting LocalAvailability controller
	I1028 11:15:43.662742       1 cache.go:32] Waiting for caches to sync for LocalAvailability controller
	I1028 11:15:43.662933       1 remote_available_controller.go:411] Starting RemoteAvailability controller
	I1028 11:15:43.662950       1 cache.go:32] Waiting for caches to sync for RemoteAvailability controller
	I1028 11:15:43.663014       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I1028 11:15:43.814644       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1028 11:15:43.816604       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1028 11:15:43.816667       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1028 11:15:43.816842       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1028 11:15:43.816910       1 policy_source.go:224] refreshing policies
	I1028 11:15:43.816861       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1028 11:15:43.816975       1 aggregator.go:171] initial CRD sync complete...
	I1028 11:15:43.816987       1 autoregister_controller.go:144] Starting autoregister controller
	I1028 11:15:43.816996       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1028 11:15:43.816913       1 shared_informer.go:320] Caches are synced for configmaps
	I1028 11:15:43.913832       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1028 11:15:43.920692       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1028 11:15:44.013780       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1028 11:15:44.018816       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1028 11:15:44.018910       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1028 11:15:44.022107       1 cache.go:39] Caches are synced for autoregister controller
	I1028 11:15:44.031002       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1028 11:15:44.722158       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1028 11:15:48.379789       1 controller.go:615] quota admission added evaluator for: endpoints
	I1028 11:15:48.580631       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [6b663450ca32] <==
	I1028 11:15:48.377121       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I1028 11:15:48.378029       1 shared_informer.go:320] Caches are synced for resource quota
	I1028 11:15:48.383499       1 shared_informer.go:320] Caches are synced for persistent volume
	I1028 11:15:48.383525       1 shared_informer.go:320] Caches are synced for node
	I1028 11:15:48.383794       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I1028 11:15:48.383923       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1028 11:15:48.383972       1 shared_informer.go:320] Caches are synced for resource quota
	I1028 11:15:48.384013       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1028 11:15:48.384030       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1028 11:15:48.384092       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-928900"
	I1028 11:15:48.415492       1 shared_informer.go:320] Caches are synced for taint
	I1028 11:15:48.415660       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1028 11:15:48.415736       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-928900"
	I1028 11:15:48.415533       1 shared_informer.go:320] Caches are synced for daemon sets
	I1028 11:15:48.415789       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1028 11:15:48.424574       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1028 11:15:48.424744       1 shared_informer.go:320] Caches are synced for GC
	I1028 11:15:48.426321       1 shared_informer.go:320] Caches are synced for attach detach
	I1028 11:15:48.426507       1 shared_informer.go:320] Caches are synced for TTL
	I1028 11:15:48.820365       1 shared_informer.go:320] Caches are synced for garbage collector
	I1028 11:15:48.877416       1 shared_informer.go:320] Caches are synced for garbage collector
	I1028 11:15:48.877507       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1028 11:15:50.815867       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="46.309312ms"
	I1028 11:15:50.816001       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="78.108µs"
	I1028 11:16:07.570656       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-928900"
	
	
	==> kube-controller-manager [f329c440d48f] <==
	I1028 11:14:38.661852       1 shared_informer.go:320] Caches are synced for resource quota
	I1028 11:14:38.708083       1 shared_informer.go:320] Caches are synced for resource quota
	I1028 11:14:38.723444       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-928900"
	I1028 11:14:38.740808       1 shared_informer.go:320] Caches are synced for PV protection
	I1028 11:14:38.745025       1 shared_informer.go:320] Caches are synced for attach detach
	I1028 11:14:38.747419       1 shared_informer.go:320] Caches are synced for persistent volume
	I1028 11:14:39.135200       1 shared_informer.go:320] Caches are synced for garbage collector
	I1028 11:14:39.135293       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1028 11:14:39.137107       1 shared_informer.go:320] Caches are synced for garbage collector
	I1028 11:14:39.354087       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-928900"
	I1028 11:14:39.734442       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="321.239601ms"
	I1028 11:14:39.820546       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="85.740755ms"
	I1028 11:14:39.820769       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="67.607µs"
	I1028 11:14:39.837899       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="249.125µs"
	I1028 11:14:41.621426       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="302.373263ms"
	I1028 11:14:41.720014       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="98.511399ms"
	I1028 11:14:41.720162       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="90.409µs"
	I1028 11:14:43.641121       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="97.41µs"
	I1028 11:14:43.765264       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="81.108µs"
	I1028 11:14:45.313151       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-928900"
	I1028 11:14:53.785477       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="114.612µs"
	I1028 11:14:54.192803       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="89.409µs"
	I1028 11:14:54.197156       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="63.106µs"
	I1028 11:15:10.794215       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="22.631108ms"
	I1028 11:15:10.794476       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="64.306µs"
	
	
	==> kube-proxy [38a823e7313b] <==
	E1028 11:15:40.354697       1 metrics.go:340] "failed to initialize nfacct client" err="nfacct sub-system not available"
	E1028 11:15:40.414506       1 metrics.go:340] "failed to initialize nfacct client" err="nfacct sub-system not available"
	I1028 11:15:40.520882       1 server_linux.go:66] "Using iptables proxy"
	I1028 11:15:44.218335       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1028 11:15:44.218667       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 11:15:44.421301       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1028 11:15:44.422650       1 server_linux.go:169] "Using iptables Proxier"
	I1028 11:15:44.428211       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	E1028 11:15:44.451306       1 proxier.go:283] "Failed to create nfacct runner, nfacct based metrics won't be available" err="nfacct sub-system not available" ipFamily="IPv4"
	E1028 11:15:44.514248       1 proxier.go:283] "Failed to create nfacct runner, nfacct based metrics won't be available" err="nfacct sub-system not available" ipFamily="IPv6"
	I1028 11:15:44.514918       1 server.go:483] "Version info" version="v1.31.2"
	I1028 11:15:44.515156       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 11:15:44.518262       1 config.go:105] "Starting endpoint slice config controller"
	I1028 11:15:44.518906       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 11:15:44.519045       1 config.go:328] "Starting node config controller"
	I1028 11:15:44.519057       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 11:15:44.519664       1 config.go:199] "Starting service config controller"
	I1028 11:15:44.519847       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 11:15:44.619294       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1028 11:15:44.621903       1 shared_informer.go:320] Caches are synced for service config
	I1028 11:15:44.625261       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [da74943d3b6b] <==
	E1028 11:14:42.871687       1 metrics.go:340] "failed to initialize nfacct client" err="nfacct sub-system not available"
	E1028 11:14:42.889549       1 metrics.go:340] "failed to initialize nfacct client" err="nfacct sub-system not available"
	I1028 11:14:42.943434       1 server_linux.go:66] "Using iptables proxy"
	I1028 11:14:43.339831       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1028 11:14:43.340553       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 11:14:43.451100       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1028 11:14:43.451291       1 server_linux.go:169] "Using iptables Proxier"
	I1028 11:14:43.457296       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	E1028 11:14:43.477299       1 proxier.go:283] "Failed to create nfacct runner, nfacct based metrics won't be available" err="nfacct sub-system not available" ipFamily="IPv4"
	E1028 11:14:43.496198       1 proxier.go:283] "Failed to create nfacct runner, nfacct based metrics won't be available" err="nfacct sub-system not available" ipFamily="IPv6"
	I1028 11:14:43.496283       1 server.go:483] "Version info" version="v1.31.2"
	I1028 11:14:43.496299       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 11:14:43.519657       1 config.go:105] "Starting endpoint slice config controller"
	I1028 11:14:43.519693       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 11:14:43.520157       1 config.go:328] "Starting node config controller"
	I1028 11:14:43.520175       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 11:14:43.521089       1 config.go:199] "Starting service config controller"
	I1028 11:14:43.521119       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 11:14:43.620331       1 shared_informer.go:320] Caches are synced for node config
	I1028 11:14:43.620566       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1028 11:14:43.622097       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [1cb5791fa8a4] <==
	I1028 11:15:41.357057       1 serving.go:386] Generated self-signed cert in-memory
	W1028 11:15:43.814798       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1028 11:15:43.815080       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1028 11:15:43.815188       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1028 11:15:43.815287       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1028 11:15:44.026676       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1028 11:15:44.026794       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 11:15:44.031337       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1028 11:15:44.031411       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1028 11:15:44.031428       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1028 11:15:44.031486       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1028 11:15:44.216306       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [6a05f72b60d2] <==
	E1028 11:14:31.944653       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1028 11:14:31.950601       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1028 11:14:31.950703       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 11:14:32.151700       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1028 11:14:32.151865       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 11:14:32.164369       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1028 11:14:32.164486       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 11:14:32.211379       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1028 11:14:32.211490       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 11:14:32.221123       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1028 11:14:32.221227       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1028 11:14:32.257718       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1028 11:14:32.257830       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1028 11:14:32.301535       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1028 11:14:32.301647       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 11:14:32.323304       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1028 11:14:32.323464       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 11:14:32.350585       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1028 11:14:32.350689       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 11:14:32.391360       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1028 11:14:32.391518       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 11:14:32.449018       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1028 11:14:32.449133       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1028 11:14:33.727514       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1028 11:15:17.528231       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 28 11:15:36 functional-928900 kubelet[2595]: I1028 11:15:36.522153    2595 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7cb76c664fc316569bb97673423f97289d1a13559307f8fd7ae2b157268b9938"
	Oct 28 11:15:36 functional-928900 kubelet[2595]: I1028 11:15:36.523102    2595 status_manager.go:851] "Failed to get status for pod" podUID="1aabb813122244e9ea32e3595201372c" pod="kube-system/etcd-functional-928900" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-928900\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Oct 28 11:15:36 functional-928900 kubelet[2595]: I1028 11:15:36.523543    2595 status_manager.go:851] "Failed to get status for pod" podUID="84fd6e349fc38958f545bfe6481372bf" pod="kube-system/kube-apiserver-functional-928900" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-928900\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Oct 28 11:15:36 functional-928900 kubelet[2595]: I1028 11:15:36.523742    2595 status_manager.go:851] "Failed to get status for pod" podUID="b456d1cc-467f-4b3d-a619-bc9f17258666" pod="kube-system/coredns-7c65d6cfc9-9j7zg" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9j7zg\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Oct 28 11:15:36 functional-928900 kubelet[2595]: I1028 11:15:36.524018    2595 status_manager.go:851] "Failed to get status for pod" podUID="b9804189-e6ae-4e66-9f4c-ec6a9431e6b3" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Oct 28 11:15:36 functional-928900 kubelet[2595]: I1028 11:15:36.524631    2595 status_manager.go:851] "Failed to get status for pod" podUID="34d1104724ea99f35885ccb979201f57" pod="kube-system/kube-scheduler-functional-928900" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-928900\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Oct 28 11:15:36 functional-928900 kubelet[2595]: I1028 11:15:36.525311    2595 status_manager.go:851] "Failed to get status for pod" podUID="1aabb813122244e9ea32e3595201372c" pod="kube-system/etcd-functional-928900" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-928900\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Oct 28 11:15:36 functional-928900 kubelet[2595]: I1028 11:15:36.525900    2595 status_manager.go:851] "Failed to get status for pod" podUID="84fd6e349fc38958f545bfe6481372bf" pod="kube-system/kube-apiserver-functional-928900" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-928900\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Oct 28 11:15:36 functional-928900 kubelet[2595]: I1028 11:15:36.526674    2595 status_manager.go:851] "Failed to get status for pod" podUID="c337e0ba137a0a3170a9fff62af5dd02" pod="kube-system/kube-controller-manager-functional-928900" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-928900\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Oct 28 11:15:36 functional-928900 kubelet[2595]: I1028 11:15:36.527385    2595 status_manager.go:851] "Failed to get status for pod" podUID="3fd1a5b0-bd4d-488c-8df3-0a73d84993e2" pod="kube-system/kube-proxy-spzh2" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-spzh2\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Oct 28 11:15:38 functional-928900 kubelet[2595]: I1028 11:15:38.120953    2595 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eddb806ee6ef8a59195908aac3a462bcb69b8de41f11fede139dc74581363c4b"
	Oct 28 11:15:38 functional-928900 kubelet[2595]: E1028 11:15:38.151932    2595 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-928900?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 28 11:15:38 functional-928900 kubelet[2595]: I1028 11:15:38.220229    2595 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="773c0388c5d6373345a476776adcf083a280fb1d42e56da435cb49b7c9c21ed9"
	Oct 28 11:15:39 functional-928900 kubelet[2595]: I1028 11:15:39.029702    2595 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e00254b3614b420e74c8cb96194692f362c31c4b09b1032ab41a0383a0c4eb24"
	Oct 28 11:15:40 functional-928900 kubelet[2595]: I1028 11:15:40.027230    2595 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c5f71040bb49b84ca5d81e6612e244955ef8ceab11b3a78054186be68fb7b46"
	Oct 28 11:15:40 functional-928900 kubelet[2595]: I1028 11:15:40.614546    2595 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cac727128b1df79c99a73753b07ee5ab644cfb38ea0e7552034d2387161d83d1"
	Oct 28 11:15:40 functional-928900 kubelet[2595]: I1028 11:15:40.718777    2595 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06e484c1a452a298db8108e32998a64b5314e5acfd2470b0d86dcdfaee71aee5"
	Oct 28 11:15:40 functional-928900 kubelet[2595]: I1028 11:15:40.719544    2595 scope.go:117] "RemoveContainer" containerID="e09f909653ddd7257480c855a93db40aa1ba8194e44164f072a0479dd1f5d49f"
	Oct 28 11:15:40 functional-928900 kubelet[2595]: E1028 11:15:40.720027    2595 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b9804189-e6ae-4e66-9f4c-ec6a9431e6b3)\"" pod="kube-system/storage-provisioner" podUID="b9804189-e6ae-4e66-9f4c-ec6a9431e6b3"
	Oct 28 11:15:40 functional-928900 kubelet[2595]: I1028 11:15:40.831656    2595 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53188ca0641289388309517fec12abd0577a672500be97fe8eeae0fc2d1ae6d2"
	Oct 28 11:15:41 functional-928900 kubelet[2595]: I1028 11:15:41.899106    2595 scope.go:117] "RemoveContainer" containerID="8e3b836243f3baabdda1f96211b7990c80c65bf007a0c01bf60b20e602ad3280"
	Oct 28 11:15:41 functional-928900 kubelet[2595]: I1028 11:15:41.899489    2595 scope.go:117] "RemoveContainer" containerID="e09f909653ddd7257480c855a93db40aa1ba8194e44164f072a0479dd1f5d49f"
	Oct 28 11:15:41 functional-928900 kubelet[2595]: E1028 11:15:41.899656    2595 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b9804189-e6ae-4e66-9f4c-ec6a9431e6b3)\"" pod="kube-system/storage-provisioner" podUID="b9804189-e6ae-4e66-9f4c-ec6a9431e6b3"
	Oct 28 11:15:43 functional-928900 kubelet[2595]: E1028 11:15:43.717712    2595 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	Oct 28 11:15:56 functional-928900 kubelet[2595]: I1028 11:15:56.735440    2595 scope.go:117] "RemoveContainer" containerID="e09f909653ddd7257480c855a93db40aa1ba8194e44164f072a0479dd1f5d49f"
	
	
	==> storage-provisioner [e09f909653dd] <==
	I1028 11:15:38.640376       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1028 11:15:38.642564       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [eff22e4d28b3] <==
	I1028 11:15:57.009573       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1028 11:15:57.027265       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1028 11:15:57.027430       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1028 11:16:14.452746       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1028 11:16:14.453176       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-928900_2a5bdedc-9c0f-49a1-a46b-932549a41118!
	I1028 11:16:14.453200       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9032eb9a-db86-43e3-8a9e-b8f0659bd41b", APIVersion:"v1", ResourceVersion:"563", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-928900_2a5bdedc-9c0f-49a1-a46b-932549a41118 became leader
	I1028 11:16:14.554268       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-928900_2a5bdedc-9c0f-49a1-a46b-932549a41118!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-928900 -n functional-928900
helpers_test.go:261: (dbg) Run:  kubectl --context functional-928900 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (5.51s)

x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (409.75s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-013200 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p old-k8s-version-013200 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.20.0: exit status 102 (6m42.2829356s)

-- stdout --
	* [old-k8s-version-013200] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5073 Build 19045.5073
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19875
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-013200" primary control-plane node in "old-k8s-version-013200" cluster
	* Pulling base image v0.0.45-1729876044-19868 ...
	* Restarting existing docker container for "old-k8s-version-013200" ...
	* Preparing Kubernetes v1.20.0 on Docker 27.3.1 ...
	* Verifying Kubernetes components...
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-013200 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	
	

-- /stdout --
** stderr ** 
	I1028 12:26:55.879922    4716 out.go:345] Setting OutFile to fd 1496 ...
	I1028 12:26:55.953162    4716 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:26:55.953162    4716 out.go:358] Setting ErrFile to fd 1748...
	I1028 12:26:55.953162    4716 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:26:55.979002    4716 out.go:352] Setting JSON to false
	I1028 12:26:55.983001    4716 start.go:129] hostinfo: {"hostname":"minikube4","uptime":5512,"bootTime":1730112903,"procs":215,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5073 Build 19045.5073","kernelVersion":"10.0.19045.5073 Build 19045.5073","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1028 12:26:55.983001    4716 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 12:26:55.986997    4716 out.go:177] * [old-k8s-version-013200] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5073 Build 19045.5073
	I1028 12:26:55.992995    4716 notify.go:220] Checking for updates...
	I1028 12:26:55.998000    4716 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1028 12:26:56.004994    4716 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 12:26:56.007992    4716 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1028 12:26:56.017001    4716 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 12:26:56.019993    4716 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 12:26:56.022992    4716 config.go:182] Loaded profile config "old-k8s-version-013200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1028 12:26:56.026002    4716 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1028 12:26:56.028001    4716 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 12:26:56.207000    4716 docker.go:123] docker version: linux-27.2.0:Docker Desktop 4.34.2 (167172)
	I1028 12:26:56.216002    4716 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1028 12:26:56.570428    4716 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:91 OomKillDisable:true NGoroutines:95 SystemTime:2024-10-28 12:26:56.538888094 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657532416 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe Schema
Version:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https:/
/github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I1028 12:26:56.574433    4716 out.go:177] * Using the docker driver based on existing profile
	I1028 12:26:56.576430    4716 start.go:297] selected driver: docker
	I1028 12:26:56.576430    4716 start.go:901] validating driver "docker" against &{Name:old-k8s-version-013200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-013200 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.121.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountS
tring:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:26:56.576430    4716 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 12:26:56.647874    4716 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1028 12:26:57.017489    4716 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:91 OomKillDisable:true NGoroutines:95 SystemTime:2024-10-28 12:26:56.984641843 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657532416 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe Schema
Version:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https:/
/github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I1028 12:26:57.019228    4716 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 12:26:57.019228    4716 cni.go:84] Creating CNI manager for ""
	I1028 12:26:57.019228    4716 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1028 12:26:57.020012    4716 start.go:340] cluster config:
	{Name:old-k8s-version-013200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-013200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.121.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:26:57.027004    4716 out.go:177] * Starting "old-k8s-version-013200" primary control-plane node in "old-k8s-version-013200" cluster
	I1028 12:26:57.029999    4716 cache.go:121] Beginning downloading kic base image for docker with docker
	I1028 12:26:57.032003    4716 out.go:177] * Pulling base image v0.0.45-1729876044-19868 ...
	I1028 12:26:57.044004    4716 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1028 12:26:57.044828    4716 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e in local docker daemon
	I1028 12:26:57.045005    4716 preload.go:146] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I1028 12:26:57.045005    4716 cache.go:56] Caching tarball of preloaded images
	I1028 12:26:57.045005    4716 preload.go:172] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1028 12:26:57.045005    4716 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1028 12:26:57.046003    4716 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-013200\config.json ...
	I1028 12:26:57.164897    4716 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e in local docker daemon, skipping pull
	I1028 12:26:57.164897    4716 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e exists in daemon, skipping load
	I1028 12:26:57.164897    4716 cache.go:194] Successfully downloaded all kic artifacts
	I1028 12:26:57.164897    4716 start.go:360] acquireMachinesLock for old-k8s-version-013200: {Name:mk707055a5461a0e90e725a06751d287bc4bf272 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 12:26:57.164897    4716 start.go:364] duration metric: took 0s to acquireMachinesLock for "old-k8s-version-013200"
	I1028 12:26:57.164897    4716 start.go:96] Skipping create...Using existing machine configuration
	I1028 12:26:57.164897    4716 fix.go:54] fixHost starting: 
	I1028 12:26:57.184914    4716 cli_runner.go:164] Run: docker container inspect old-k8s-version-013200 --format={{.State.Status}}
	I1028 12:26:57.252891    4716 fix.go:112] recreateIfNeeded on old-k8s-version-013200: state=Stopped err=<nil>
	W1028 12:26:57.252891    4716 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 12:26:57.255882    4716 out.go:177] * Restarting existing docker container for "old-k8s-version-013200" ...
	I1028 12:26:57.274883    4716 cli_runner.go:164] Run: docker start old-k8s-version-013200
	I1028 12:26:58.072635    4716 cli_runner.go:164] Run: docker container inspect old-k8s-version-013200 --format={{.State.Status}}
	I1028 12:26:58.171650    4716 kic.go:430] container "old-k8s-version-013200" state is running.
	I1028 12:26:58.184651    4716 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-013200
	I1028 12:26:58.263657    4716 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-013200\config.json ...
	I1028 12:26:58.266650    4716 machine.go:93] provisionDockerMachine start ...
	I1028 12:26:58.278647    4716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-013200
	I1028 12:26:58.376225    4716 main.go:141] libmachine: Using SSH client type: native
	I1028 12:26:58.377223    4716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x743340] 0x745e80 <nil>  [] 0s} 127.0.0.1 65116 <nil> <nil>}
	I1028 12:26:58.377223    4716 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 12:26:58.379235    4716 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1028 12:27:01.620106    4716 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-013200
	
	I1028 12:27:01.620106    4716 ubuntu.go:169] provisioning hostname "old-k8s-version-013200"
	I1028 12:27:01.635093    4716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-013200
	I1028 12:27:01.739100    4716 main.go:141] libmachine: Using SSH client type: native
	I1028 12:27:01.740103    4716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x743340] 0x745e80 <nil>  [] 0s} 127.0.0.1 65116 <nil> <nil>}
	I1028 12:27:01.740103    4716 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-013200 && echo "old-k8s-version-013200" | sudo tee /etc/hostname
	I1028 12:27:01.971309    4716 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-013200
	
	I1028 12:27:01.981307    4716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-013200
	I1028 12:27:02.080063    4716 main.go:141] libmachine: Using SSH client type: native
	I1028 12:27:02.080063    4716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x743340] 0x745e80 <nil>  [] 0s} 127.0.0.1 65116 <nil> <nil>}
	I1028 12:27:02.080063    4716 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-013200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-013200/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-013200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 12:27:02.281088    4716 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 12:27:02.281088    4716 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1028 12:27:02.281088    4716 ubuntu.go:177] setting up certificates
	I1028 12:27:02.281088    4716 provision.go:84] configureAuth start
	I1028 12:27:02.294142    4716 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-013200
	I1028 12:27:02.385416    4716 provision.go:143] copyHostCerts
	I1028 12:27:02.386395    4716 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1028 12:27:02.386395    4716 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1028 12:27:02.386395    4716 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1028 12:27:02.387399    4716 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1028 12:27:02.387399    4716 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1028 12:27:02.388413    4716 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1028 12:27:02.389418    4716 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1028 12:27:02.389418    4716 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1028 12:27:02.390425    4716 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1028 12:27:02.391400    4716 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.old-k8s-version-013200 san=[127.0.0.1 192.168.121.2 localhost minikube old-k8s-version-013200]
	I1028 12:27:02.769099    4716 provision.go:177] copyRemoteCerts
	I1028 12:27:02.782103    4716 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 12:27:02.790107    4716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-013200
	I1028 12:27:02.891020    4716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65116 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\old-k8s-version-013200\id_rsa Username:docker}
	I1028 12:27:03.042458    4716 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 12:27:03.106993    4716 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1233 bytes)
	I1028 12:27:03.168449    4716 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 12:27:03.228047    4716 provision.go:87] duration metric: took 946.9211ms to configureAuth
	I1028 12:27:03.228047    4716 ubuntu.go:193] setting minikube options for container-runtime
	I1028 12:27:03.229052    4716 config.go:182] Loaded profile config "old-k8s-version-013200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1028 12:27:03.241084    4716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-013200
	I1028 12:27:03.345817    4716 main.go:141] libmachine: Using SSH client type: native
	I1028 12:27:03.346827    4716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x743340] 0x745e80 <nil>  [] 0s} 127.0.0.1 65116 <nil> <nil>}
	I1028 12:27:03.346827    4716 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1028 12:27:03.550776    4716 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1028 12:27:03.550776    4716 ubuntu.go:71] root file system type: overlay
	I1028 12:27:03.550776    4716 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1028 12:27:03.562070    4716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-013200
	I1028 12:27:03.653747    4716 main.go:141] libmachine: Using SSH client type: native
	I1028 12:27:03.654751    4716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x743340] 0x745e80 <nil>  [] 0s} 127.0.0.1 65116 <nil> <nil>}
	I1028 12:27:03.654751    4716 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1028 12:27:03.877976    4716 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1028 12:27:03.890756    4716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-013200
	I1028 12:27:03.999819    4716 main.go:141] libmachine: Using SSH client type: native
	I1028 12:27:04.000820    4716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x743340] 0x745e80 <nil>  [] 0s} 127.0.0.1 65116 <nil> <nil>}
	I1028 12:27:04.000820    4716 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1028 12:27:04.215452    4716 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 12:27:04.216405    4716 machine.go:96] duration metric: took 5.9495174s to provisionDockerMachine
	I1028 12:27:04.216405    4716 start.go:293] postStartSetup for "old-k8s-version-013200" (driver="docker")
	I1028 12:27:04.216405    4716 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 12:27:04.239159    4716 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 12:27:04.252055    4716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-013200
	I1028 12:27:04.327537    4716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65116 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\old-k8s-version-013200\id_rsa Username:docker}
	I1028 12:27:04.476899    4716 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 12:27:04.486119    4716 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1028 12:27:04.486119    4716 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1028 12:27:04.486119    4716 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1028 12:27:04.486119    4716 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1028 12:27:04.486119    4716 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1028 12:27:04.487130    4716 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1028 12:27:04.488115    4716 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\111762.pem -> 111762.pem in /etc/ssl/certs
	I1028 12:27:04.499135    4716 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 12:27:04.523119    4716 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\111762.pem --> /etc/ssl/certs/111762.pem (1708 bytes)
	I1028 12:27:04.579872    4716 start.go:296] duration metric: took 363.4522ms for postStartSetup
	I1028 12:27:04.590784    4716 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1028 12:27:04.598790    4716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-013200
	I1028 12:27:04.676672    4716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65116 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\old-k8s-version-013200\id_rsa Username:docker}
	I1028 12:27:04.810163    4716 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1028 12:27:04.826143    4716 fix.go:56] duration metric: took 7.6609397s for fixHost
	I1028 12:27:04.826143    4716 start.go:83] releasing machines lock for "old-k8s-version-013200", held for 7.6609397s
	I1028 12:27:04.838672    4716 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-013200
	I1028 12:27:04.916426    4716 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1028 12:27:04.928432    4716 ssh_runner.go:195] Run: cat /version.json
	I1028 12:27:04.928432    4716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-013200
	I1028 12:27:04.942441    4716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-013200
	I1028 12:27:05.015908    4716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65116 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\old-k8s-version-013200\id_rsa Username:docker}
	I1028 12:27:05.027901    4716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65116 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\old-k8s-version-013200\id_rsa Username:docker}
	W1028 12:27:05.139905    4716 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1028 12:27:05.158910    4716 ssh_runner.go:195] Run: systemctl --version
	I1028 12:27:05.179918    4716 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1028 12:27:05.198892    4716 ssh_runner.go:195] Run: sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	W1028 12:27:05.218904    4716 start.go:439] unable to name loopback interface in configureRuntimes: unable to patch loopback cni config "/etc/cni/net.d/*loopback.conf*": sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;: Process exited with status 1
	stdout:
	
	stderr:
	find: '\\etc\\cni\\net.d': No such file or directory
	W1028 12:27:05.227900    4716 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1028 12:27:05.227900    4716 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1028 12:27:05.234897    4716 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1028 12:27:05.286721    4716 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1028 12:27:05.316881    4716 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 12:27:05.317017    4716 start.go:495] detecting cgroup driver to use...
	I1028 12:27:05.317017    4716 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1028 12:27:05.317017    4716 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 12:27:05.363732    4716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I1028 12:27:05.401031    4716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1028 12:27:05.424060    4716 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1028 12:27:05.436050    4716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1028 12:27:05.471040    4716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1028 12:27:05.505726    4716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1028 12:27:05.538683    4716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1028 12:27:05.585901    4716 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 12:27:05.620176    4716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1028 12:27:05.657542    4716 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 12:27:05.688545    4716 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 12:27:05.719545    4716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:27:05.873120    4716 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1028 12:27:06.124156    4716 start.go:495] detecting cgroup driver to use...
	I1028 12:27:06.124373    4716 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1028 12:27:06.140802    4716 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1028 12:27:06.172715    4716 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I1028 12:27:06.188714    4716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1028 12:27:06.215717    4716 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 12:27:06.262324    4716 ssh_runner.go:195] Run: which cri-dockerd
	I1028 12:27:06.286332    4716 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1028 12:27:06.304332    4716 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1028 12:27:06.356029    4716 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1028 12:27:06.589546    4716 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1028 12:27:06.782768    4716 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1028 12:27:06.782976    4716 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1028 12:27:06.842698    4716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:27:07.104749    4716 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1028 12:27:08.353537    4716 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.2487382s)
	I1028 12:27:08.363547    4716 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1028 12:27:08.450067    4716 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1028 12:27:08.517060    4716 out.go:235] * Preparing Kubernetes v1.20.0 on Docker 27.3.1 ...
	I1028 12:27:08.533101    4716 cli_runner.go:164] Run: docker exec -t old-k8s-version-013200 dig +short host.docker.internal
	I1028 12:27:08.756060    4716 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1028 12:27:08.768056    4716 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1028 12:27:08.776064    4716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:27:08.813083    4716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-013200
	I1028 12:27:08.904173    4716 kubeadm.go:883] updating cluster {Name:old-k8s-version-013200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-013200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.121.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 12:27:08.905181    4716 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1028 12:27:08.917207    4716 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1028 12:27:08.970821    4716 docker.go:689] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-proxy:v1.20.0
	k8s.gcr.io/kube-apiserver:v1.20.0
	k8s.gcr.io/kube-controller-manager:v1.20.0
	k8s.gcr.io/kube-scheduler:v1.20.0
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	k8s.gcr.io/pause:3.2
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I1028 12:27:08.970821    4716 docker.go:695] registry.k8s.io/kube-apiserver:v1.20.0 wasn't preloaded
	I1028 12:27:08.980810    4716 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1028 12:27:09.034230    4716 ssh_runner.go:195] Run: which lz4
	I1028 12:27:09.059226    4716 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 12:27:09.065230    4716 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 12:27:09.069509    4716 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (401930599 bytes)
	I1028 12:27:16.417679    4716 docker.go:653] duration metric: took 7.3721243s to copy over tarball
	I1028 12:27:16.429045    4716 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 12:27:22.424044    4716 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (5.9947581s)
	I1028 12:27:22.424167    4716 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 12:27:22.530299    4716 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1028 12:27:22.559573    4716 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2824 bytes)
	I1028 12:27:22.609172    4716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:27:22.806637    4716 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1028 12:27:31.206794    4716 ssh_runner.go:235] Completed: sudo systemctl restart docker: (8.399819s)
	I1028 12:27:31.225554    4716 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1028 12:27:31.276513    4716 docker.go:689] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-proxy:v1.20.0
	k8s.gcr.io/kube-controller-manager:v1.20.0
	k8s.gcr.io/kube-apiserver:v1.20.0
	k8s.gcr.io/kube-scheduler:v1.20.0
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	k8s.gcr.io/pause:3.2
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I1028 12:27:31.276639    4716 docker.go:695] registry.k8s.io/kube-apiserver:v1.20.0 wasn't preloaded
	I1028 12:27:31.276769    4716 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1028 12:27:31.293412    4716 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:27:31.300430    4716 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1028 12:27:31.311444    4716 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:27:31.314411    4716 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:27:31.318411    4716 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:27:31.319410    4716 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1028 12:27:31.325416    4716 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:27:31.342412    4716 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1028 12:27:31.342412    4716 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:27:31.342412    4716 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:27:31.355425    4716 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:27:31.358428    4716 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1028 12:27:31.362431    4716 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:27:31.363438    4716 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1028 12:27:31.374425    4716 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1028 12:27:31.375424    4716 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	W1028 12:27:31.414200    4716 image.go:188] authn lookup for registry.k8s.io/kube-scheduler:v1.20.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1028 12:27:31.497053    4716 image.go:188] authn lookup for registry.k8s.io/pause:3.2 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1028 12:27:31.588618    4716 image.go:188] authn lookup for gcr.io/k8s-minikube/storage-provisioner:v5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1028 12:27:31.671485    4716 image.go:188] authn lookup for registry.k8s.io/kube-proxy:v1.20.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1028 12:27:31.753425    4716 image.go:188] authn lookup for registry.k8s.io/kube-apiserver:v1.20.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1028 12:27:31.831270    4716 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	W1028 12:27:31.846422    4716 image.go:188] authn lookup for registry.k8s.io/etcd:3.4.13-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1028 12:27:31.855649    4716 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:27:31.883203    4716 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1028 12:27:31.883203    4716 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.2 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.2
	I1028 12:27:31.883203    4716 docker.go:337] Removing image: registry.k8s.io/pause:3.2
	I1028 12:27:31.898205    4716 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I1028 12:27:31.909216    4716 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1028 12:27:31.909216    4716 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.20.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.20.0
	I1028 12:27:31.909216    4716 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:27:31.919229    4716 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:27:31.949217    4716 cache_images.go:289] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.2
	W1028 12:27:31.950220    4716 image.go:188] authn lookup for registry.k8s.io/coredns:1.7.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1028 12:27:31.960205    4716 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:27:31.971220    4716 cache_images.go:289] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.20.0
	I1028 12:27:32.001209    4716 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1028 12:27:32.001209    4716 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.20.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.20.0
	I1028 12:27:32.001209    4716 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:27:32.010207    4716 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.20.0
	W1028 12:27:32.039217    4716 image.go:188] authn lookup for registry.k8s.io/kube-controller-manager:v1.20.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1028 12:27:32.044216    4716 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:27:32.057202    4716 cache_images.go:289] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.20.0
	I1028 12:27:32.098061    4716 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1028 12:27:32.098061    4716 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1028 12:27:32.098061    4716 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.20.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.20.0
	I1028 12:27:32.098061    4716 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:27:32.108056    4716 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:27:32.137140    4716 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1028 12:27:32.137140    4716 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.4.13-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.13-0
	I1028 12:27:32.137140    4716 docker.go:337] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1028 12:27:32.144055    4716 cache_images.go:289] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.20.0
	I1028 12:27:32.145073    4716 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.13-0
	I1028 12:27:32.179544    4716 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:27:32.189549    4716 cache_images.go:289] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.13-0
	I1028 12:27:32.220773    4716 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1028 12:27:32.295301    4716 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:27:32.296301    4716 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1028 12:27:32.296301    4716 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns:1.7.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.7.0
	I1028 12:27:32.296301    4716 docker.go:337] Removing image: registry.k8s.io/coredns:1.7.0
	I1028 12:27:32.306291    4716 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.7.0
	I1028 12:27:32.341193    4716 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1028 12:27:32.341193    4716 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.20.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.20.0
	I1028 12:27:32.341193    4716 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:27:32.353196    4716 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:27:32.353196    4716 cache_images.go:289] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.7.0
	I1028 12:27:32.401857    4716 cache_images.go:289] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.20.0
	I1028 12:27:32.402675    4716 cache_images.go:92] duration metric: took 1.1258614s to LoadCachedImages
	W1028 12:27:32.402889    4716 out.go:270] X Unable to load cached images: LoadCachedImages: CreateFile C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.2: The system cannot find the file specified.
	X Unable to load cached images: LoadCachedImages: CreateFile C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.2: The system cannot find the file specified.
	I1028 12:27:32.402971    4716 kubeadm.go:934] updating node { 192.168.121.2 8443 v1.20.0 docker true true} ...
	I1028 12:27:32.403287    4716 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-013200 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.121.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-013200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 12:27:32.414679    4716 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1028 12:27:32.529717    4716 cni.go:84] Creating CNI manager for ""
	I1028 12:27:32.529717    4716 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1028 12:27:32.529717    4716 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 12:27:32.530720    4716 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.121.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-013200 NodeName:old-k8s-version-013200 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.121.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.121.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1028 12:27:32.530720    4716 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.121.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-013200"
	  kubeletExtraArgs:
	    node-ip: 192.168.121.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.121.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 12:27:32.539733    4716 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1028 12:27:32.560757    4716 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 12:27:32.574729    4716 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 12:27:32.596729    4716 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (349 bytes)
	I1028 12:27:32.644885    4716 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 12:27:32.682888    4716 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2121 bytes)
	I1028 12:27:32.743686    4716 ssh_runner.go:195] Run: grep 192.168.121.2	control-plane.minikube.internal$ /etc/hosts
	I1028 12:27:32.755694    4716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.121.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
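	The `/etc/hosts` rewrite above is an idempotent pattern: strip any stale line for the name, append the fresh mapping, and copy the result into place. A minimal sketch of the same pattern as a function (the function name and the writable-file assumption are illustrative, not from minikube's source; the log does the final copy via `sudo cp` onto /etc/hosts):

```shell
# Idempotently ensure an "IP<TAB>hostname" mapping in a hosts-style file.
# Assumption: the target file is writable here; the log achieves the same
# effect on /etc/hosts through a temp file plus `sudo cp`.
ensure_host_entry() {
  hosts_file="$1" ip="$2" name="$3"
  tab="$(printf '\t')"
  tmp="$(mktemp)"
  # Keep every line that does not already map this name, then append ours.
  { grep -v "${tab}${name}\$" "$hosts_file"; printf '%s\t%s\n' "$ip" "$name"; } > "$tmp"
  cp "$tmp" "$hosts_file" && rm -f "$tmp"
}
```

	Running it twice leaves exactly one entry for the name, which is why the grep-then-rewrite sequence is safe to repeat on every cluster start.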
	I1028 12:27:32.789539    4716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:27:32.966571    4716 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:27:32.994562    4716 certs.go:68] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-013200 for IP: 192.168.121.2
	I1028 12:27:32.994562    4716 certs.go:194] generating shared ca certs ...
	I1028 12:27:32.994562    4716 certs.go:226] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:27:32.994562    4716 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1028 12:27:32.995582    4716 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1028 12:27:32.995582    4716 certs.go:256] generating profile certs ...
	I1028 12:27:32.996565    4716 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-013200\client.key
	I1028 12:27:32.997568    4716 certs.go:359] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-013200\apiserver.key.e3750edd
	I1028 12:27:32.997568    4716 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-013200\proxy-client.key
	I1028 12:27:32.998586    4716 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11176.pem (1338 bytes)
	W1028 12:27:32.998586    4716 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11176_empty.pem, impossibly tiny 0 bytes
	I1028 12:27:32.999657    4716 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1028 12:27:32.999657    4716 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1028 12:27:32.999657    4716 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1028 12:27:33.000587    4716 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1028 12:27:33.000587    4716 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\111762.pem (1708 bytes)
	I1028 12:27:33.002565    4716 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 12:27:33.060868    4716 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1028 12:27:33.125814    4716 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 12:27:33.212305    4716 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1028 12:27:33.270894    4716 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-013200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1028 12:27:33.340408    4716 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-013200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 12:27:33.418518    4716 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-013200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 12:27:33.472584    4716 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-013200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 12:27:33.553645    4716 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\111762.pem --> /usr/share/ca-certificates/111762.pem (1708 bytes)
	I1028 12:27:33.638625    4716 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 12:27:33.699642    4716 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11176.pem --> /usr/share/ca-certificates/11176.pem (1338 bytes)
	I1028 12:27:33.747644    4716 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 12:27:33.798558    4716 ssh_runner.go:195] Run: openssl version
	I1028 12:27:33.841558    4716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 12:27:33.936317    4716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:27:34.015342    4716 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 11:02 /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:27:34.037331    4716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:27:34.083332    4716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 12:27:34.155081    4716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11176.pem && ln -fs /usr/share/ca-certificates/11176.pem /etc/ssl/certs/11176.pem"
	I1028 12:27:34.246056    4716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11176.pem
	I1028 12:27:34.313150    4716 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:13 /usr/share/ca-certificates/11176.pem
	I1028 12:27:34.328701    4716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11176.pem
	I1028 12:27:34.360700    4716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11176.pem /etc/ssl/certs/51391683.0"
	I1028 12:27:34.398689    4716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111762.pem && ln -fs /usr/share/ca-certificates/111762.pem /etc/ssl/certs/111762.pem"
	I1028 12:27:34.451376    4716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111762.pem
	I1028 12:27:34.462974    4716 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:13 /usr/share/ca-certificates/111762.pem
	I1028 12:27:34.475965    4716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111762.pem
	I1028 12:27:34.545947    4716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111762.pem /etc/ssl/certs/3ec20f2e.0"
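	The `openssl x509 -hash` / `ln -fs .../<hash>.0` pairs above implement OpenSSL's hashed trust-directory layout: verification code locates a CA by its subject-name hash, so the symlink must be named `<hash>.0`. A sketch of that step as a helper (the name `install_ca` is mine, not minikube's):

```shell
# Link a CA certificate into a trust directory under its OpenSSL subject
# hash, mirroring the log's `test -L ... || ln -fs ...` commands.
install_ca() {
  cert="$1" trust_dir="$2"
  h="$(openssl x509 -hash -noout -in "$cert")"
  # -f replaces a stale link; -s makes a symlink, as in the log.
  ln -fs "$cert" "${trust_dir}/${h}.0"
}
```

	With the link in place, `openssl` (and anything built on it) can resolve the CA from the directory without a rehash pass.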
	I1028 12:27:34.637262    4716 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 12:27:34.668246    4716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 12:27:34.745259    4716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 12:27:34.839248    4716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 12:27:34.930138    4716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 12:27:34.964142    4716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 12:27:34.999135    4716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1028 12:27:35.022816    4716 kubeadm.go:392] StartCluster: {Name:old-k8s-version-013200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-013200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.121.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:27:35.037283    4716 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1028 12:27:35.146036    4716 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 12:27:35.175539    4716 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 12:27:35.175539    4716 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 12:27:35.193170    4716 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 12:27:35.215184    4716 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 12:27:35.232201    4716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-013200
	I1028 12:27:35.327451    4716 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-013200" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1028 12:27:35.328448    4716 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-013200" cluster setting kubeconfig missing "old-k8s-version-013200" context setting]
	I1028 12:27:35.331459    4716 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:27:35.368475    4716 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 12:27:35.387442    4716 kubeadm.go:630] The running cluster does not require reconfiguration: 127.0.0.1
	I1028 12:27:35.388453    4716 kubeadm.go:597] duration metric: took 212.9052ms to restartPrimaryControlPlane
	I1028 12:27:35.388453    4716 kubeadm.go:394] duration metric: took 365.9104ms to StartCluster
	I1028 12:27:35.388453    4716 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:27:35.388453    4716 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1028 12:27:35.390458    4716 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:27:35.392453    4716 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.121.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 12:27:35.392453    4716 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 12:27:35.393466    4716 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-013200"
	I1028 12:27:35.393466    4716 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-013200"
	I1028 12:27:35.393466    4716 addons.go:69] Setting dashboard=true in profile "old-k8s-version-013200"
	I1028 12:27:35.393466    4716 addons.go:234] Setting addon dashboard=true in "old-k8s-version-013200"
	W1028 12:27:35.393466    4716 addons.go:243] addon dashboard should already be in state true
	I1028 12:27:35.393466    4716 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-013200"
	I1028 12:27:35.393466    4716 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-013200"
	I1028 12:27:35.393466    4716 host.go:66] Checking if "old-k8s-version-013200" exists ...
	I1028 12:27:35.393466    4716 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-013200"
	W1028 12:27:35.393466    4716 addons.go:243] addon metrics-server should already be in state true
	W1028 12:27:35.393466    4716 addons.go:243] addon storage-provisioner should already be in state true
	I1028 12:27:35.393466    4716 host.go:66] Checking if "old-k8s-version-013200" exists ...
	I1028 12:27:35.393466    4716 host.go:66] Checking if "old-k8s-version-013200" exists ...
	I1028 12:27:35.393466    4716 config.go:182] Loaded profile config "old-k8s-version-013200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1028 12:27:35.393466    4716 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-013200"
	I1028 12:27:35.424453    4716 cli_runner.go:164] Run: docker container inspect old-k8s-version-013200 --format={{.State.Status}}
	I1028 12:27:35.424453    4716 cli_runner.go:164] Run: docker container inspect old-k8s-version-013200 --format={{.State.Status}}
	I1028 12:27:35.428464    4716 cli_runner.go:164] Run: docker container inspect old-k8s-version-013200 --format={{.State.Status}}
	I1028 12:27:35.435460    4716 cli_runner.go:164] Run: docker container inspect old-k8s-version-013200 --format={{.State.Status}}
	I1028 12:27:35.458462    4716 out.go:177] * Verifying Kubernetes components...
	I1028 12:27:35.486482    4716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:27:35.527480    4716 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-013200"
	W1028 12:27:35.527480    4716 addons.go:243] addon default-storageclass should already be in state true
	I1028 12:27:35.527480    4716 host.go:66] Checking if "old-k8s-version-013200" exists ...
	I1028 12:27:35.550457    4716 cli_runner.go:164] Run: docker container inspect old-k8s-version-013200 --format={{.State.Status}}
	I1028 12:27:35.561395    4716 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1028 12:27:35.561603    4716 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:27:35.561966    4716 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1028 12:27:35.568090    4716 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1028 12:27:35.568090    4716 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1028 12:27:35.572807    4716 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1028 12:27:35.576809    4716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-013200
	I1028 12:27:35.617827    4716 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 12:27:35.617827    4716 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 12:27:35.622831    4716 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1028 12:27:35.622831    4716 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1028 12:27:35.632337    4716 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 12:27:35.632337    4716 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 12:27:35.632862    4716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-013200
	I1028 12:27:35.642849    4716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-013200
	I1028 12:27:35.650874    4716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-013200
	I1028 12:27:35.724831    4716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65116 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\old-k8s-version-013200\id_rsa Username:docker}
	I1028 12:27:35.732877    4716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65116 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\old-k8s-version-013200\id_rsa Username:docker}
	I1028 12:27:35.735839    4716 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:27:35.747858    4716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65116 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\old-k8s-version-013200\id_rsa Username:docker}
	I1028 12:27:35.759834    4716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65116 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\old-k8s-version-013200\id_rsa Username:docker}
	I1028 12:27:35.782842    4716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-013200
	I1028 12:27:35.872846    4716 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-013200" to be "Ready" ...
	I1028 12:27:35.896844    4716 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1028 12:27:35.896844    4716 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1028 12:27:35.936485    4716 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1028 12:27:35.936485    4716 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1028 12:27:35.940146    4716 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 12:27:35.952014    4716 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1028 12:27:35.952014    4716 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1028 12:27:35.960023    4716 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 12:27:36.010477    4716 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1028 12:27:36.010477    4716 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1028 12:27:36.109577    4716 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 12:27:36.109713    4716 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1028 12:27:36.133078    4716 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1028 12:27:36.133078    4716 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1028 12:27:36.250474    4716 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 12:27:36.316444    4716 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1028 12:27:36.316444    4716 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W1028 12:27:36.434582    4716 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 12:27:36.434582    4716 retry.go:31] will retry after 127.025909ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 12:27:36.522991    4716 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1028 12:27:36.522991    4716 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W1028 12:27:36.533973    4716 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 12:27:36.533973    4716 retry.go:31] will retry after 130.935571ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 12:27:36.573963    4716 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 12:27:36.634900    4716 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1028 12:27:36.634940    4716 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1028 12:27:36.673946    4716 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1028 12:27:36.813332    4716 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 12:27:36.813675    4716 retry.go:31] will retry after 161.068571ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 12:27:36.820851    4716 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1028 12:27:36.820851    4716 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1028 12:27:36.988390    4716 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 12:27:37.009745    4716 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1028 12:27:37.009987    4716 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	W1028 12:27:37.126736    4716 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 12:27:37.126736    4716 retry.go:31] will retry after 407.220432ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 12:27:37.233533    4716 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1028 12:27:37.233533    4716 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	W1028 12:27:37.415525    4716 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 12:27:37.415672    4716 retry.go:31] will retry after 385.622898ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 12:27:37.441971    4716 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1028 12:27:37.547392    4716 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1028 12:27:37.815695    4716 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 12:27:37.815827    4716 retry.go:31] will retry after 498.208512ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 12:27:37.821711    4716 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1028 12:27:38.124901    4716 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 12:27:38.124901    4716 retry.go:31] will retry after 369.624858ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1028 12:27:38.217259    4716 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 12:27:38.217259    4716 retry.go:31] will retry after 490.667997ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 12:27:38.334239    4716 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1028 12:27:38.335646    4716 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 12:27:38.335646    4716 retry.go:31] will retry after 784.485187ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 12:27:38.505837    4716 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1028 12:27:38.737907    4716 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1028 12:27:38.921534    4716 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 12:27:38.921534    4716 retry.go:31] will retry after 695.095403ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1028 12:27:39.142537    4716 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1028 12:27:39.635783    4716 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 12:27:48.918321    4716 node_ready.go:49] node "old-k8s-version-013200" has status "Ready":"True"
	I1028 12:27:48.918321    4716 node_ready.go:38] duration metric: took 13.0449501s for node "old-k8s-version-013200" to be "Ready" ...
	I1028 12:27:48.918321    4716 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:27:49.329386    4716 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-h4dhd" in "kube-system" namespace to be "Ready" ...
	I1028 12:27:49.918183    4716 pod_ready.go:93] pod "coredns-74ff55c5b-h4dhd" in "kube-system" namespace has status "Ready":"True"
	I1028 12:27:49.918183    4716 pod_ready.go:82] duration metric: took 588.7734ms for pod "coredns-74ff55c5b-h4dhd" in "kube-system" namespace to be "Ready" ...
	I1028 12:27:49.918183    4716 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-013200" in "kube-system" namespace to be "Ready" ...
	I1028 12:27:52.019842    4716 pod_ready.go:103] pod "etcd-old-k8s-version-013200" in "kube-system" namespace has status "Ready":"False"
	I1028 12:27:53.615701    4716 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (15.1091292s)
	I1028 12:27:53.616174    4716 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (14.8776687s)
	I1028 12:27:53.616371    4716 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (14.4732526s)
	I1028 12:27:53.616568    4716 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (13.9802232s)
	I1028 12:27:53.616568    4716 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-013200"
	I1028 12:27:53.629066    4716 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-013200 addons enable metrics-server
	
	I1028 12:27:53.822550    4716 out.go:177] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I1028 12:27:53.836129    4716 addons.go:510] duration metric: took 18.442934s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
	I1028 12:27:54.025217    4716 pod_ready.go:103] pod "etcd-old-k8s-version-013200" in "kube-system" namespace has status "Ready":"False"
	I1028 12:27:56.443183    4716 pod_ready.go:103] pod "etcd-old-k8s-version-013200" in "kube-system" namespace has status "Ready":"False"
	I1028 12:27:58.446629    4716 pod_ready.go:103] pod "etcd-old-k8s-version-013200" in "kube-system" namespace has status "Ready":"False"
	I1028 12:28:00.944466    4716 pod_ready.go:103] pod "etcd-old-k8s-version-013200" in "kube-system" namespace has status "Ready":"False"
	I1028 12:28:03.437073    4716 pod_ready.go:103] pod "etcd-old-k8s-version-013200" in "kube-system" namespace has status "Ready":"False"
	I1028 12:28:05.937936    4716 pod_ready.go:103] pod "etcd-old-k8s-version-013200" in "kube-system" namespace has status "Ready":"False"
	I1028 12:28:07.940516    4716 pod_ready.go:103] pod "etcd-old-k8s-version-013200" in "kube-system" namespace has status "Ready":"False"
	I1028 12:28:10.627995    4716 pod_ready.go:103] pod "etcd-old-k8s-version-013200" in "kube-system" namespace has status "Ready":"False"
	I1028 12:28:12.942652    4716 pod_ready.go:103] pod "etcd-old-k8s-version-013200" in "kube-system" namespace has status "Ready":"False"
	I1028 12:28:14.957733    4716 pod_ready.go:103] pod "etcd-old-k8s-version-013200" in "kube-system" namespace has status "Ready":"False"
	I1028 12:28:17.449244    4716 pod_ready.go:103] pod "etcd-old-k8s-version-013200" in "kube-system" namespace has status "Ready":"False"
	I1028 12:28:19.941747    4716 pod_ready.go:103] pod "etcd-old-k8s-version-013200" in "kube-system" namespace has status "Ready":"False"
	I1028 12:28:22.438449    4716 pod_ready.go:103] pod "etcd-old-k8s-version-013200" in "kube-system" namespace has status "Ready":"False"
	I1028 12:28:24.937396    4716 pod_ready.go:103] pod "etcd-old-k8s-version-013200" in "kube-system" namespace has status "Ready":"False"
	I1028 12:28:26.938585    4716 pod_ready.go:103] pod "etcd-old-k8s-version-013200" in "kube-system" namespace has status "Ready":"False"
	I1028 12:28:28.938935    4716 pod_ready.go:103] pod "etcd-old-k8s-version-013200" in "kube-system" namespace has status "Ready":"False"
	I1028 12:28:30.939105    4716 pod_ready.go:103] pod "etcd-old-k8s-version-013200" in "kube-system" namespace has status "Ready":"False"
	I1028 12:28:33.437279    4716 pod_ready.go:103] pod "etcd-old-k8s-version-013200" in "kube-system" namespace has status "Ready":"False"
	I1028 12:28:35.441608    4716 pod_ready.go:103] pod "etcd-old-k8s-version-013200" in "kube-system" namespace has status "Ready":"False"
	I1028 12:28:37.937512    4716 pod_ready.go:103] pod "etcd-old-k8s-version-013200" in "kube-system" namespace has status "Ready":"False"
	I1028 12:28:39.938088    4716 pod_ready.go:103] pod "etcd-old-k8s-version-013200" in "kube-system" namespace has status "Ready":"False"
	I1028 12:28:41.947645    4716 pod_ready.go:103] pod "etcd-old-k8s-version-013200" in "kube-system" namespace has status "Ready":"False"
	I1028 12:28:44.439309    4716 pod_ready.go:103] pod "etcd-old-k8s-version-013200" in "kube-system" namespace has status "Ready":"False"
	I1028 12:28:46.940285    4716 pod_ready.go:103] pod "etcd-old-k8s-version-013200" in "kube-system" namespace has status "Ready":"False"
	I1028 12:28:49.439406    4716 pod_ready.go:103] pod "etcd-old-k8s-version-013200" in "kube-system" namespace has status "Ready":"False"
	I1028 12:28:51.941342    4716 pod_ready.go:103] pod "etcd-old-k8s-version-013200" in "kube-system" namespace has status "Ready":"False"
	I1028 12:28:54.441676    4716 pod_ready.go:103] pod "etcd-old-k8s-version-013200" in "kube-system" namespace has status "Ready":"False"
	I1028 12:28:56.459235    4716 pod_ready.go:103] pod "etcd-old-k8s-version-013200" in "kube-system" namespace has status "Ready":"False"
	I1028 12:28:58.939311    4716 pod_ready.go:103] pod "etcd-old-k8s-version-013200" in "kube-system" namespace has status "Ready":"False"
	I1028 12:29:00.939645    4716 pod_ready.go:103] pod "etcd-old-k8s-version-013200" in "kube-system" namespace has status "Ready":"False"
	I1028 12:29:02.945004    4716 pod_ready.go:103] pod "etcd-old-k8s-version-013200" in "kube-system" namespace has status "Ready":"False"
	I1028 12:29:04.441432    4716 pod_ready.go:93] pod "etcd-old-k8s-version-013200" in "kube-system" namespace has status "Ready":"True"
	I1028 12:29:04.441497    4716 pod_ready.go:82] duration metric: took 1m14.520307s for pod "etcd-old-k8s-version-013200" in "kube-system" namespace to be "Ready" ...
	I1028 12:29:04.441497    4716 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-013200" in "kube-system" namespace to be "Ready" ...
	I1028 12:29:04.465704    4716 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-013200" in "kube-system" namespace has status "Ready":"True"
	I1028 12:29:04.465760    4716 pod_ready.go:82] duration metric: took 24.0524ms for pod "kube-apiserver-old-k8s-version-013200" in "kube-system" namespace to be "Ready" ...
	I1028 12:29:04.465760    4716 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-013200" in "kube-system" namespace to be "Ready" ...
	I1028 12:29:06.483926    4716 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-013200" in "kube-system" namespace has status "Ready":"False"
	I1028 12:29:08.983277    4716 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-013200" in "kube-system" namespace has status "Ready":"False"
	I1028 12:29:10.984021    4716 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-013200" in "kube-system" namespace has status "Ready":"False"
	I1028 12:29:13.494670    4716 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-013200" in "kube-system" namespace has status "Ready":"True"
	I1028 12:29:13.494670    4716 pod_ready.go:82] duration metric: took 9.0285454s for pod "kube-controller-manager-old-k8s-version-013200" in "kube-system" namespace to be "Ready" ...
	I1028 12:29:13.494670    4716 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wm5p7" in "kube-system" namespace to be "Ready" ...
	I1028 12:29:13.508689    4716 pod_ready.go:93] pod "kube-proxy-wm5p7" in "kube-system" namespace has status "Ready":"True"
	I1028 12:29:13.508689    4716 pod_ready.go:82] duration metric: took 14.0187ms for pod "kube-proxy-wm5p7" in "kube-system" namespace to be "Ready" ...
	I1028 12:29:13.508689    4716 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-013200" in "kube-system" namespace to be "Ready" ...
	I1028 12:29:13.521687    4716 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-013200" in "kube-system" namespace has status "Ready":"True"
	I1028 12:29:13.521687    4716 pod_ready.go:82] duration metric: took 12.9974ms for pod "kube-scheduler-old-k8s-version-013200" in "kube-system" namespace to be "Ready" ...
	I1028 12:29:13.521687    4716 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace to be "Ready" ...
	I1028 12:29:15.536597    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:29:17.537809    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:29:19.539489    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:29:21.548723    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:29:24.038493    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:29:26.041267    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:29:28.538169    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:29:31.041550    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:29:33.046407    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:29:35.548246    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:29:38.042697    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:29:40.536649    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:29:42.537087    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:29:44.538084    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:29:46.538591    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:29:49.038579    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:29:51.039212    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:29:53.052655    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:29:55.541176    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:29:58.038297    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:30:00.040809    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:30:02.042259    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:30:04.540119    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:30:06.540605    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:30:09.039787    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:30:11.041721    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:30:13.541834    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:30:16.043125    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:30:18.539332    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:30:20.540964    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:30:22.542598    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:30:25.047466    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:30:27.540036    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:30:29.540346    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:30:32.042749    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:30:34.045330    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:30:36.540934    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:30:38.542488    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:30:40.543257    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:30:43.045130    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:30:45.540908    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:30:48.042910    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:30:50.043234    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:30:52.046327    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:30:54.539148    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:30:56.556256    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:30:59.045712    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:31:01.540117    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:31:03.545128    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:31:06.047001    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:31:08.542018    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:31:11.041432    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:31:13.042370    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:31:15.044027    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:31:17.540929    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:31:19.553768    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:31:22.042587    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:31:24.540964    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:31:26.541202    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:31:28.541844    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:31:30.542997    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:31:33.056808    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:31:35.541948    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:31:37.542768    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:31:40.043585    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:31:42.545785    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:31:44.548933    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:31:47.040722    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:31:49.542466    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:31:51.543404    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:31:53.545926    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:31:56.046220    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:31:58.543031    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:00.543792    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:02.544074    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:04.544425    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:07.053929    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:09.543029    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:11.548784    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:14.048407    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:16.542990    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:18.546065    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:20.547136    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:23.043206    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:25.047067    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:27.544836    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:30.044949    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:32.051950    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:34.544412    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:36.545218    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:38.546339    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:41.045343    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:43.895556    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:46.050221    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:48.052091    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:50.557193    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:53.052144    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:55.544167    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:57.551604    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:33:00.044329    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:33:02.047572    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:33:04.544391    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:33:06.546456    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:33:08.547134    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:33:10.553851    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:33:13.048123    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:33:13.531597    4716 pod_ready.go:82] duration metric: took 4m0.0000834s for pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace to be "Ready" ...
	E1028 12:33:13.531597    4716 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1028 12:33:13.531597    4716 pod_ready.go:39] duration metric: took 5m24.6000367s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:33:13.531597    4716 api_server.go:52] waiting for apiserver process to appear ...
	I1028 12:33:13.543575    4716 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 12:33:13.594586    4716 logs.go:282] 2 containers: [17aaa55a6fdb 49e5e06a6361]
	I1028 12:33:13.603577    4716 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 12:33:13.654612    4716 logs.go:282] 2 containers: [642d04757828 cc895678b294]
	I1028 12:33:13.666568    4716 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 12:33:13.722587    4716 logs.go:282] 2 containers: [b380cacb66c6 4e391fcae110]
	I1028 12:33:13.733577    4716 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 12:33:13.790617    4716 logs.go:282] 2 containers: [8379b070c9db 9ce4d10d6386]
	I1028 12:33:13.811823    4716 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 12:33:13.859826    4716 logs.go:282] 2 containers: [0a1c612f812e ee68d9004e36]
	I1028 12:33:13.869818    4716 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 12:33:13.921817    4716 logs.go:282] 2 containers: [84a52451395b 1a4a898cd699]
	I1028 12:33:13.940838    4716 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 12:33:13.985834    4716 logs.go:282] 0 containers: []
	W1028 12:33:13.985834    4716 logs.go:284] No container was found matching "kindnet"
	I1028 12:33:14.001819    4716 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 12:33:14.052812    4716 logs.go:282] 2 containers: [7fe2f6b267f7 befcb830733f]
	I1028 12:33:14.065055    4716 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1028 12:33:14.121668    4716 logs.go:282] 1 containers: [7f41acfe30e7]
	I1028 12:33:14.121668    4716 logs.go:123] Gathering logs for kube-scheduler [8379b070c9db] ...
	I1028 12:33:14.121668    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8379b070c9db"
	I1028 12:33:14.178010    4716 logs.go:123] Gathering logs for kube-apiserver [17aaa55a6fdb] ...
	I1028 12:33:14.178010    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17aaa55a6fdb"
	I1028 12:33:14.257621    4716 logs.go:123] Gathering logs for etcd [642d04757828] ...
	I1028 12:33:14.257621    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 642d04757828"
	I1028 12:33:14.336625    4716 logs.go:123] Gathering logs for kube-proxy [0a1c612f812e] ...
	I1028 12:33:14.336625    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a1c612f812e"
	I1028 12:33:14.396735    4716 logs.go:123] Gathering logs for kube-proxy [ee68d9004e36] ...
	I1028 12:33:14.396735    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee68d9004e36"
	I1028 12:33:14.449712    4716 logs.go:123] Gathering logs for Docker ...
	I1028 12:33:14.449712    4716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 12:33:14.502470    4716 logs.go:123] Gathering logs for dmesg ...
	I1028 12:33:14.502470    4716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:33:14.540108    4716 logs.go:123] Gathering logs for etcd [cc895678b294] ...
	I1028 12:33:14.540108    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc895678b294"
	I1028 12:33:14.624281    4716 logs.go:123] Gathering logs for kube-scheduler [9ce4d10d6386] ...
	I1028 12:33:14.624281    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ce4d10d6386"
	I1028 12:33:14.695343    4716 logs.go:123] Gathering logs for kube-controller-manager [84a52451395b] ...
	I1028 12:33:14.695343    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84a52451395b"
	I1028 12:33:14.769374    4716 logs.go:123] Gathering logs for storage-provisioner [7fe2f6b267f7] ...
	I1028 12:33:14.769374    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fe2f6b267f7"
	I1028 12:33:14.829346    4716 logs.go:123] Gathering logs for storage-provisioner [befcb830733f] ...
	I1028 12:33:14.829346    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 befcb830733f"
	I1028 12:33:14.877347    4716 logs.go:123] Gathering logs for kube-apiserver [49e5e06a6361] ...
	I1028 12:33:14.877347    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49e5e06a6361"
	I1028 12:33:14.998130    4716 logs.go:123] Gathering logs for coredns [4e391fcae110] ...
	I1028 12:33:14.998130    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e391fcae110"
	I1028 12:33:15.059123    4716 logs.go:123] Gathering logs for coredns [b380cacb66c6] ...
	I1028 12:33:15.059123    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b380cacb66c6"
	I1028 12:33:15.118136    4716 logs.go:123] Gathering logs for kube-controller-manager [1a4a898cd699] ...
	I1028 12:33:15.118136    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a4a898cd699"
	I1028 12:33:15.208127    4716 logs.go:123] Gathering logs for kubernetes-dashboard [7f41acfe30e7] ...
	I1028 12:33:15.208127    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f41acfe30e7"
	I1028 12:33:15.261132    4716 logs.go:123] Gathering logs for container status ...
	I1028 12:33:15.261132    4716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:33:15.371385    4716 logs.go:123] Gathering logs for kubelet ...
	I1028 12:33:15.371385    4716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1028 12:33:15.460552    4716 logs.go:138] Found kubelet problem: Oct 28 12:27:54 old-k8s-version-013200 kubelet[1888]: E1028 12:27:54.029375    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W1028 12:33:15.466361    4716 logs.go:138] Found kubelet problem: Oct 28 12:27:55 old-k8s-version-013200 kubelet[1888]: E1028 12:27:55.942392    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.466361    4716 logs.go:138] Found kubelet problem: Oct 28 12:27:57 old-k8s-version-013200 kubelet[1888]: E1028 12:27:57.107195    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.479092    4716 logs.go:138] Found kubelet problem: Oct 28 12:28:11 old-k8s-version-013200 kubelet[1888]: E1028 12:28:11.122198    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W1028 12:33:15.484517    4716 logs.go:138] Found kubelet problem: Oct 28 12:28:13 old-k8s-version-013200 kubelet[1888]: E1028 12:28:13.332946    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W1028 12:33:15.485516    4716 logs.go:138] Found kubelet problem: Oct 28 12:28:13 old-k8s-version-013200 kubelet[1888]: E1028 12:28:13.846655    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.486571    4716 logs.go:138] Found kubelet problem: Oct 28 12:28:14 old-k8s-version-013200 kubelet[1888]: E1028 12:28:14.881950    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.487513    4716 logs.go:138] Found kubelet problem: Oct 28 12:28:16 old-k8s-version-013200 kubelet[1888]: E1028 12:28:16.965583    1888 pod_workers.go:191] Error syncing pod 34dc73e1-5d6a-469b-90d3-812ffa9e7fe0 ("storage-provisioner_kube-system(34dc73e1-5d6a-469b-90d3-812ffa9e7fe0)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(34dc73e1-5d6a-469b-90d3-812ffa9e7fe0)"
	W1028 12:33:15.487513    4716 logs.go:138] Found kubelet problem: Oct 28 12:28:23 old-k8s-version-013200 kubelet[1888]: E1028 12:28:23.025449    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.491518    4716 logs.go:138] Found kubelet problem: Oct 28 12:28:35 old-k8s-version-013200 kubelet[1888]: E1028 12:28:35.546175    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W1028 12:33:15.495526    4716 logs.go:138] Found kubelet problem: Oct 28 12:28:38 old-k8s-version-013200 kubelet[1888]: E1028 12:28:38.073497    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W1028 12:33:15.496104    4716 logs.go:138] Found kubelet problem: Oct 28 12:28:50 old-k8s-version-013200 kubelet[1888]: E1028 12:28:50.021427    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.496298    4716 logs.go:138] Found kubelet problem: Oct 28 12:28:51 old-k8s-version-013200 kubelet[1888]: E1028 12:28:51.020441    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.496555    4716 logs.go:138] Found kubelet problem: Oct 28 12:29:02 old-k8s-version-013200 kubelet[1888]: E1028 12:29:02.017808    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.500406    4716 logs.go:138] Found kubelet problem: Oct 28 12:29:03 old-k8s-version-013200 kubelet[1888]: E1028 12:29:03.445268    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W1028 12:33:15.500406    4716 logs.go:138] Found kubelet problem: Oct 28 12:29:15 old-k8s-version-013200 kubelet[1888]: E1028 12:29:15.015992    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.501395    4716 logs.go:138] Found kubelet problem: Oct 28 12:29:17 old-k8s-version-013200 kubelet[1888]: E1028 12:29:17.032348    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.504277    4716 logs.go:138] Found kubelet problem: Oct 28 12:29:27 old-k8s-version-013200 kubelet[1888]: E1028 12:29:27.070258    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W1028 12:33:15.504277    4716 logs.go:138] Found kubelet problem: Oct 28 12:29:33 old-k8s-version-013200 kubelet[1888]: E1028 12:29:33.013280    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.504277    4716 logs.go:138] Found kubelet problem: Oct 28 12:29:42 old-k8s-version-013200 kubelet[1888]: E1028 12:29:42.013804    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.507568    4716 logs.go:138] Found kubelet problem: Oct 28 12:29:44 old-k8s-version-013200 kubelet[1888]: E1028 12:29:44.435765    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W1028 12:33:15.507568    4716 logs.go:138] Found kubelet problem: Oct 28 12:29:53 old-k8s-version-013200 kubelet[1888]: E1028 12:29:53.013992    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.508458    4716 logs.go:138] Found kubelet problem: Oct 28 12:29:59 old-k8s-version-013200 kubelet[1888]: E1028 12:29:59.014335    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.508804    4716 logs.go:138] Found kubelet problem: Oct 28 12:30:06 old-k8s-version-013200 kubelet[1888]: E1028 12:30:06.009919    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.509276    4716 logs.go:138] Found kubelet problem: Oct 28 12:30:11 old-k8s-version-013200 kubelet[1888]: E1028 12:30:11.010136    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.509620    4716 logs.go:138] Found kubelet problem: Oct 28 12:30:18 old-k8s-version-013200 kubelet[1888]: E1028 12:30:18.010397    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.510277    4716 logs.go:138] Found kubelet problem: Oct 28 12:30:23 old-k8s-version-013200 kubelet[1888]: E1028 12:30:23.025022    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.510508    4716 logs.go:138] Found kubelet problem: Oct 28 12:30:33 old-k8s-version-013200 kubelet[1888]: E1028 12:30:33.007126    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.510807    4716 logs.go:138] Found kubelet problem: Oct 28 12:30:38 old-k8s-version-013200 kubelet[1888]: E1028 12:30:38.006958    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.511182    4716 logs.go:138] Found kubelet problem: Oct 28 12:30:46 old-k8s-version-013200 kubelet[1888]: E1028 12:30:46.005686    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.511671    4716 logs.go:138] Found kubelet problem: Oct 28 12:30:53 old-k8s-version-013200 kubelet[1888]: E1028 12:30:53.006426    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.520503    4716 logs.go:138] Found kubelet problem: Oct 28 12:31:00 old-k8s-version-013200 kubelet[1888]: E1028 12:31:00.054849    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W1028 12:33:15.522499    4716 logs.go:138] Found kubelet problem: Oct 28 12:31:06 old-k8s-version-013200 kubelet[1888]: E1028 12:31:06.472718    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W1028 12:33:15.522499    4716 logs.go:138] Found kubelet problem: Oct 28 12:31:13 old-k8s-version-013200 kubelet[1888]: E1028 12:31:13.003165    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.522499    4716 logs.go:138] Found kubelet problem: Oct 28 12:31:18 old-k8s-version-013200 kubelet[1888]: E1028 12:31:18.003362    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.522499    4716 logs.go:138] Found kubelet problem: Oct 28 12:31:24 old-k8s-version-013200 kubelet[1888]: E1028 12:31:24.003108    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.523497    4716 logs.go:138] Found kubelet problem: Oct 28 12:31:30 old-k8s-version-013200 kubelet[1888]: E1028 12:31:30.016995    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.523497    4716 logs.go:138] Found kubelet problem: Oct 28 12:31:38 old-k8s-version-013200 kubelet[1888]: E1028 12:31:38.998959    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.523497    4716 logs.go:138] Found kubelet problem: Oct 28 12:31:43 old-k8s-version-013200 kubelet[1888]: E1028 12:31:43.999179    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.523497    4716 logs.go:138] Found kubelet problem: Oct 28 12:31:52 old-k8s-version-013200 kubelet[1888]: E1028 12:31:52.999572    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.523497    4716 logs.go:138] Found kubelet problem: Oct 28 12:31:58 old-k8s-version-013200 kubelet[1888]: E1028 12:31:58.999520    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.524497    4716 logs.go:138] Found kubelet problem: Oct 28 12:32:08 old-k8s-version-013200 kubelet[1888]: E1028 12:32:08.001604    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.524497    4716 logs.go:138] Found kubelet problem: Oct 28 12:32:10 old-k8s-version-013200 kubelet[1888]: E1028 12:32:10.996463    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.524497    4716 logs.go:138] Found kubelet problem: Oct 28 12:32:22 old-k8s-version-013200 kubelet[1888]: E1028 12:32:22.996367    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.524497    4716 logs.go:138] Found kubelet problem: Oct 28 12:32:23 old-k8s-version-013200 kubelet[1888]: E1028 12:32:23.996397    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.525564    4716 logs.go:138] Found kubelet problem: Oct 28 12:32:37 old-k8s-version-013200 kubelet[1888]: E1028 12:32:37.007064    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.525564    4716 logs.go:138] Found kubelet problem: Oct 28 12:32:37 old-k8s-version-013200 kubelet[1888]: E1028 12:32:37.992511    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.525564    4716 logs.go:138] Found kubelet problem: Oct 28 12:32:48 old-k8s-version-013200 kubelet[1888]: E1028 12:32:48.993292    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.525564    4716 logs.go:138] Found kubelet problem: Oct 28 12:32:51 old-k8s-version-013200 kubelet[1888]: E1028 12:32:51.994731    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.526513    4716 logs.go:138] Found kubelet problem: Oct 28 12:33:01 old-k8s-version-013200 kubelet[1888]: E1028 12:33:00.994748    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.526513    4716 logs.go:138] Found kubelet problem: Oct 28 12:33:05 old-k8s-version-013200 kubelet[1888]: E1028 12:33:05.989461    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.526513    4716 logs.go:138] Found kubelet problem: Oct 28 12:33:11 old-k8s-version-013200 kubelet[1888]: E1028 12:33:11.989269    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I1028 12:33:15.526513    4716 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:33:15.526513    4716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 12:33:15.752348    4716 out.go:358] Setting ErrFile to fd 1748...
	I1028 12:33:15.752348    4716 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1028 12:33:15.752896    4716 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1028 12:33:15.752896    4716 out.go:270]   Oct 28 12:32:48 old-k8s-version-013200 kubelet[1888]: E1028 12:32:48.993292    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Oct 28 12:32:48 old-k8s-version-013200 kubelet[1888]: E1028 12:32:48.993292    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.753018    4716 out.go:270]   Oct 28 12:32:51 old-k8s-version-013200 kubelet[1888]: E1028 12:32:51.994731    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	  Oct 28 12:32:51 old-k8s-version-013200 kubelet[1888]: E1028 12:32:51.994731    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.753018    4716 out.go:270]   Oct 28 12:33:01 old-k8s-version-013200 kubelet[1888]: E1028 12:33:00.994748    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Oct 28 12:33:01 old-k8s-version-013200 kubelet[1888]: E1028 12:33:00.994748    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.753018    4716 out.go:270]   Oct 28 12:33:05 old-k8s-version-013200 kubelet[1888]: E1028 12:33:05.989461    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	  Oct 28 12:33:05 old-k8s-version-013200 kubelet[1888]: E1028 12:33:05.989461    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.753018    4716 out.go:270]   Oct 28 12:33:11 old-k8s-version-013200 kubelet[1888]: E1028 12:33:11.989269    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Oct 28 12:33:11 old-k8s-version-013200 kubelet[1888]: E1028 12:33:11.989269    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I1028 12:33:15.753018    4716 out.go:358] Setting ErrFile to fd 1748...
	I1028 12:33:15.753018    4716 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:33:25.768536    4716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:33:25.798507    4716 api_server.go:72] duration metric: took 5m50.3897517s to wait for apiserver process to appear ...
	I1028 12:33:25.798507    4716 api_server.go:88] waiting for apiserver healthz status ...
	I1028 12:33:25.807542    4716 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 12:33:25.857109    4716 logs.go:282] 2 containers: [17aaa55a6fdb 49e5e06a6361]
	I1028 12:33:25.866122    4716 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 12:33:25.922322    4716 logs.go:282] 2 containers: [642d04757828 cc895678b294]
	I1028 12:33:25.935312    4716 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 12:33:25.982483    4716 logs.go:282] 2 containers: [b380cacb66c6 4e391fcae110]
	I1028 12:33:25.996437    4716 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 12:33:26.041627    4716 logs.go:282] 2 containers: [8379b070c9db 9ce4d10d6386]
	I1028 12:33:26.050629    4716 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 12:33:26.098908    4716 logs.go:282] 2 containers: [0a1c612f812e ee68d9004e36]
	I1028 12:33:26.106901    4716 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 12:33:26.155220    4716 logs.go:282] 2 containers: [84a52451395b 1a4a898cd699]
	I1028 12:33:26.168218    4716 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 12:33:26.214235    4716 logs.go:282] 0 containers: []
	W1028 12:33:26.214235    4716 logs.go:284] No container was found matching "kindnet"
	I1028 12:33:26.224231    4716 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1028 12:33:26.265220    4716 logs.go:282] 1 containers: [7f41acfe30e7]
	I1028 12:33:26.277229    4716 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 12:33:26.329423    4716 logs.go:282] 2 containers: [7fe2f6b267f7 befcb830733f]
	I1028 12:33:26.329423    4716 logs.go:123] Gathering logs for kube-scheduler [9ce4d10d6386] ...
	I1028 12:33:26.329423    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ce4d10d6386"
	I1028 12:33:26.392436    4716 logs.go:123] Gathering logs for kube-controller-manager [84a52451395b] ...
	I1028 12:33:26.392436    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84a52451395b"
	I1028 12:33:26.471265    4716 logs.go:123] Gathering logs for storage-provisioner [befcb830733f] ...
	I1028 12:33:26.472279    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 befcb830733f"
	I1028 12:33:26.523487    4716 logs.go:123] Gathering logs for container status ...
	I1028 12:33:26.523487    4716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:33:26.627567    4716 logs.go:123] Gathering logs for etcd [642d04757828] ...
	I1028 12:33:26.627567    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 642d04757828"
	I1028 12:33:26.703581    4716 logs.go:123] Gathering logs for coredns [4e391fcae110] ...
	I1028 12:33:26.703581    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e391fcae110"
	I1028 12:33:26.761576    4716 logs.go:123] Gathering logs for kube-controller-manager [1a4a898cd699] ...
	I1028 12:33:26.761576    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a4a898cd699"
	I1028 12:33:26.850582    4716 logs.go:123] Gathering logs for kubernetes-dashboard [7f41acfe30e7] ...
	I1028 12:33:26.850582    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f41acfe30e7"
	I1028 12:33:26.939799    4716 logs.go:123] Gathering logs for kube-apiserver [49e5e06a6361] ...
	I1028 12:33:26.939799    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49e5e06a6361"
	I1028 12:33:27.040996    4716 logs.go:123] Gathering logs for kube-proxy [ee68d9004e36] ...
	I1028 12:33:27.040996    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee68d9004e36"
	I1028 12:33:27.090255    4716 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:33:27.090255    4716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 12:33:27.330970    4716 logs.go:123] Gathering logs for kube-proxy [0a1c612f812e] ...
	I1028 12:33:27.330970    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a1c612f812e"
	I1028 12:33:27.389529    4716 logs.go:123] Gathering logs for Docker ...
	I1028 12:33:27.389529    4716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 12:33:27.446527    4716 logs.go:123] Gathering logs for kubelet ...
	I1028 12:33:27.446527    4716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1028 12:33:27.542005    4716 logs.go:138] Found kubelet problem: Oct 28 12:27:54 old-k8s-version-013200 kubelet[1888]: E1028 12:27:54.029375    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W1028 12:33:27.543013    4716 logs.go:138] Found kubelet problem: Oct 28 12:27:55 old-k8s-version-013200 kubelet[1888]: E1028 12:27:55.942392    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.544004    4716 logs.go:138] Found kubelet problem: Oct 28 12:27:57 old-k8s-version-013200 kubelet[1888]: E1028 12:27:57.107195    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.547018    4716 logs.go:138] Found kubelet problem: Oct 28 12:28:11 old-k8s-version-013200 kubelet[1888]: E1028 12:28:11.122198    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W1028 12:33:27.550023    4716 logs.go:138] Found kubelet problem: Oct 28 12:28:13 old-k8s-version-013200 kubelet[1888]: E1028 12:28:13.332946    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W1028 12:33:27.551015    4716 logs.go:138] Found kubelet problem: Oct 28 12:28:13 old-k8s-version-013200 kubelet[1888]: E1028 12:28:13.846655    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.551015    4716 logs.go:138] Found kubelet problem: Oct 28 12:28:14 old-k8s-version-013200 kubelet[1888]: E1028 12:28:14.881950    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.552018    4716 logs.go:138] Found kubelet problem: Oct 28 12:28:16 old-k8s-version-013200 kubelet[1888]: E1028 12:28:16.965583    1888 pod_workers.go:191] Error syncing pod 34dc73e1-5d6a-469b-90d3-812ffa9e7fe0 ("storage-provisioner_kube-system(34dc73e1-5d6a-469b-90d3-812ffa9e7fe0)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(34dc73e1-5d6a-469b-90d3-812ffa9e7fe0)"
	W1028 12:33:27.552018    4716 logs.go:138] Found kubelet problem: Oct 28 12:28:23 old-k8s-version-013200 kubelet[1888]: E1028 12:28:23.025449    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.559024    4716 logs.go:138] Found kubelet problem: Oct 28 12:28:35 old-k8s-version-013200 kubelet[1888]: E1028 12:28:35.546175    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W1028 12:33:27.565028    4716 logs.go:138] Found kubelet problem: Oct 28 12:28:38 old-k8s-version-013200 kubelet[1888]: E1028 12:28:38.073497    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W1028 12:33:27.565028    4716 logs.go:138] Found kubelet problem: Oct 28 12:28:50 old-k8s-version-013200 kubelet[1888]: E1028 12:28:50.021427    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.566028    4716 logs.go:138] Found kubelet problem: Oct 28 12:28:51 old-k8s-version-013200 kubelet[1888]: E1028 12:28:51.020441    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.566028    4716 logs.go:138] Found kubelet problem: Oct 28 12:29:02 old-k8s-version-013200 kubelet[1888]: E1028 12:29:02.017808    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.570020    4716 logs.go:138] Found kubelet problem: Oct 28 12:29:03 old-k8s-version-013200 kubelet[1888]: E1028 12:29:03.445268    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W1028 12:33:27.570020    4716 logs.go:138] Found kubelet problem: Oct 28 12:29:15 old-k8s-version-013200 kubelet[1888]: E1028 12:29:15.015992    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.571028    4716 logs.go:138] Found kubelet problem: Oct 28 12:29:17 old-k8s-version-013200 kubelet[1888]: E1028 12:29:17.032348    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.574011    4716 logs.go:138] Found kubelet problem: Oct 28 12:29:27 old-k8s-version-013200 kubelet[1888]: E1028 12:29:27.070258    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W1028 12:33:27.574011    4716 logs.go:138] Found kubelet problem: Oct 28 12:29:33 old-k8s-version-013200 kubelet[1888]: E1028 12:29:33.013280    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.574011    4716 logs.go:138] Found kubelet problem: Oct 28 12:29:42 old-k8s-version-013200 kubelet[1888]: E1028 12:29:42.013804    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.577004    4716 logs.go:138] Found kubelet problem: Oct 28 12:29:44 old-k8s-version-013200 kubelet[1888]: E1028 12:29:44.435765    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W1028 12:33:27.577004    4716 logs.go:138] Found kubelet problem: Oct 28 12:29:53 old-k8s-version-013200 kubelet[1888]: E1028 12:29:53.013992    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.577004    4716 logs.go:138] Found kubelet problem: Oct 28 12:29:59 old-k8s-version-013200 kubelet[1888]: E1028 12:29:59.014335    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.577004    4716 logs.go:138] Found kubelet problem: Oct 28 12:30:06 old-k8s-version-013200 kubelet[1888]: E1028 12:30:06.009919    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.577004    4716 logs.go:138] Found kubelet problem: Oct 28 12:30:11 old-k8s-version-013200 kubelet[1888]: E1028 12:30:11.010136    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.577999    4716 logs.go:138] Found kubelet problem: Oct 28 12:30:18 old-k8s-version-013200 kubelet[1888]: E1028 12:30:18.010397    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.577999    4716 logs.go:138] Found kubelet problem: Oct 28 12:30:23 old-k8s-version-013200 kubelet[1888]: E1028 12:30:23.025022    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.577999    4716 logs.go:138] Found kubelet problem: Oct 28 12:30:33 old-k8s-version-013200 kubelet[1888]: E1028 12:30:33.007126    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.577999    4716 logs.go:138] Found kubelet problem: Oct 28 12:30:38 old-k8s-version-013200 kubelet[1888]: E1028 12:30:38.006958    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.578997    4716 logs.go:138] Found kubelet problem: Oct 28 12:30:46 old-k8s-version-013200 kubelet[1888]: E1028 12:30:46.005686    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.578997    4716 logs.go:138] Found kubelet problem: Oct 28 12:30:53 old-k8s-version-013200 kubelet[1888]: E1028 12:30:53.006426    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.581005    4716 logs.go:138] Found kubelet problem: Oct 28 12:31:00 old-k8s-version-013200 kubelet[1888]: E1028 12:31:00.054849    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W1028 12:33:27.582998    4716 logs.go:138] Found kubelet problem: Oct 28 12:31:06 old-k8s-version-013200 kubelet[1888]: E1028 12:31:06.472718    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W1028 12:33:27.582998    4716 logs.go:138] Found kubelet problem: Oct 28 12:31:13 old-k8s-version-013200 kubelet[1888]: E1028 12:31:13.003165    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.584004    4716 logs.go:138] Found kubelet problem: Oct 28 12:31:18 old-k8s-version-013200 kubelet[1888]: E1028 12:31:18.003362    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.584004    4716 logs.go:138] Found kubelet problem: Oct 28 12:31:24 old-k8s-version-013200 kubelet[1888]: E1028 12:31:24.003108    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.584004    4716 logs.go:138] Found kubelet problem: Oct 28 12:31:30 old-k8s-version-013200 kubelet[1888]: E1028 12:31:30.016995    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.585006    4716 logs.go:138] Found kubelet problem: Oct 28 12:31:38 old-k8s-version-013200 kubelet[1888]: E1028 12:31:38.998959    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.585006    4716 logs.go:138] Found kubelet problem: Oct 28 12:31:43 old-k8s-version-013200 kubelet[1888]: E1028 12:31:43.999179    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.585006    4716 logs.go:138] Found kubelet problem: Oct 28 12:31:52 old-k8s-version-013200 kubelet[1888]: E1028 12:31:52.999572    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.586005    4716 logs.go:138] Found kubelet problem: Oct 28 12:31:58 old-k8s-version-013200 kubelet[1888]: E1028 12:31:58.999520    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.586005    4716 logs.go:138] Found kubelet problem: Oct 28 12:32:08 old-k8s-version-013200 kubelet[1888]: E1028 12:32:08.001604    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.586005    4716 logs.go:138] Found kubelet problem: Oct 28 12:32:10 old-k8s-version-013200 kubelet[1888]: E1028 12:32:10.996463    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.586005    4716 logs.go:138] Found kubelet problem: Oct 28 12:32:22 old-k8s-version-013200 kubelet[1888]: E1028 12:32:22.996367    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.587006    4716 logs.go:138] Found kubelet problem: Oct 28 12:32:23 old-k8s-version-013200 kubelet[1888]: E1028 12:32:23.996397    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.587006    4716 logs.go:138] Found kubelet problem: Oct 28 12:32:37 old-k8s-version-013200 kubelet[1888]: E1028 12:32:37.007064    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.588008    4716 logs.go:138] Found kubelet problem: Oct 28 12:32:37 old-k8s-version-013200 kubelet[1888]: E1028 12:32:37.992511    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.588008    4716 logs.go:138] Found kubelet problem: Oct 28 12:32:48 old-k8s-version-013200 kubelet[1888]: E1028 12:32:48.993292    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.588008    4716 logs.go:138] Found kubelet problem: Oct 28 12:32:51 old-k8s-version-013200 kubelet[1888]: E1028 12:32:51.994731    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.588008    4716 logs.go:138] Found kubelet problem: Oct 28 12:33:01 old-k8s-version-013200 kubelet[1888]: E1028 12:33:00.994748    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.588008    4716 logs.go:138] Found kubelet problem: Oct 28 12:33:05 old-k8s-version-013200 kubelet[1888]: E1028 12:33:05.989461    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.589003    4716 logs.go:138] Found kubelet problem: Oct 28 12:33:11 old-k8s-version-013200 kubelet[1888]: E1028 12:33:11.989269    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.589003    4716 logs.go:138] Found kubelet problem: Oct 28 12:33:16 old-k8s-version-013200 kubelet[1888]: E1028 12:33:16.990577    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.589003    4716 logs.go:138] Found kubelet problem: Oct 28 12:33:23 old-k8s-version-013200 kubelet[1888]: E1028 12:33:23.991584    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I1028 12:33:27.589003    4716 logs.go:123] Gathering logs for dmesg ...
	I1028 12:33:27.589003    4716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:33:27.621013    4716 logs.go:123] Gathering logs for coredns [b380cacb66c6] ...
	I1028 12:33:27.621013    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b380cacb66c6"
	I1028 12:33:27.676401    4716 logs.go:123] Gathering logs for kube-scheduler [8379b070c9db] ...
	I1028 12:33:27.676401    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8379b070c9db"
	I1028 12:33:27.742685    4716 logs.go:123] Gathering logs for storage-provisioner [7fe2f6b267f7] ...
	I1028 12:33:27.742685    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fe2f6b267f7"
	I1028 12:33:27.808905    4716 logs.go:123] Gathering logs for kube-apiserver [17aaa55a6fdb] ...
	I1028 12:33:27.808905    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17aaa55a6fdb"
	I1028 12:33:27.883904    4716 logs.go:123] Gathering logs for etcd [cc895678b294] ...
	I1028 12:33:27.883904    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc895678b294"
	I1028 12:33:27.964902    4716 out.go:358] Setting ErrFile to fd 1748...
	I1028 12:33:27.964902    4716 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1028 12:33:27.964902    4716 out.go:270] X Problems detected in kubelet:
	W1028 12:33:27.964902    4716 out.go:270]   Oct 28 12:33:01 old-k8s-version-013200 kubelet[1888]: E1028 12:33:00.994748    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.964902    4716 out.go:270]   Oct 28 12:33:05 old-k8s-version-013200 kubelet[1888]: E1028 12:33:05.989461    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.964902    4716 out.go:270]   Oct 28 12:33:11 old-k8s-version-013200 kubelet[1888]: E1028 12:33:11.989269    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.964902    4716 out.go:270]   Oct 28 12:33:16 old-k8s-version-013200 kubelet[1888]: E1028 12:33:16.990577    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.964902    4716 out.go:270]   Oct 28 12:33:23 old-k8s-version-013200 kubelet[1888]: E1028 12:33:23.991584    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I1028 12:33:27.964902    4716 out.go:358] Setting ErrFile to fd 1748...
	I1028 12:33:27.964902    4716 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:33:37.966552    4716 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:65120/healthz ...
	I1028 12:33:37.996926    4716 api_server.go:279] https://127.0.0.1:65120/healthz returned 200:
	ok
	I1028 12:33:38.005912    4716 out.go:201] 
	W1028 12:33:38.009398    4716 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W1028 12:33:38.009521    4716 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W1028 12:33:38.009552    4716 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W1028 12:33:38.009643    4716 out.go:270] * 
	W1028 12:33:38.010818    4716 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 12:33:38.016384    4716 out.go:201] 

                                                
                                                
** /stderr **
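The repeated `Found kubelet problem` warnings in the stderr dump above come from minikube scanning kubelet log output for known error patterns (here, `ImagePullBackOff` for `metrics-server` and `dashboard-metrics-scraper`). As a rough illustration only — a hypothetical Python sketch, not minikube's actual Go implementation in `logs.go` — the same back-off failures can be summarized per container/image like this:

```python
import re
from collections import Counter

# Sample kubelet lines, abridged from the log above. Inside these Python
# literals, \\" produces the literal \" sequence present in the raw log.
LINES = [
    'skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \\"fake.domain/registry.k8s.io/echoserver:1.4\\""',
    'skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \\"registry.k8s.io/echoserver:1.4\\""',
]

# Matches the container name and the image inside the escaped \"...\" quotes.
PATTERN = re.compile(
    r'"StartContainer" for "([^"]+)" with ImagePullBackOff: '
    r'"Back-off pulling image \\"([^"\\]+)\\""'
)

def backoff_summary(lines):
    """Count ImagePullBackOff occurrences per (container, image) pair."""
    counts = Counter()
    for line in lines:
        m = PATTERN.search(line)
        if m:
            counts[(m.group(1), m.group(2))] += 1
    return counts

if __name__ == "__main__":
    for (container, image), n in backoff_summary(LINES).items():
        print(f"{container}: {n}x back-off pulling {image}")
```

In the failed run above, the same two pods repeat this cycle for the entire 6m0s wait, which is why the test ultimately exits with `K8S_UNHEALTHY_CONTROL_PLANE`.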
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-windows-amd64.exe start -p old-k8s-version-013200 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.20.0": exit status 102
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-013200
helpers_test.go:235: (dbg) docker inspect old-k8s-version-013200:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3a10c58a6b61c43ae06684465f7aa243c18775beded9f8d6e6291b85c62c8f2d",
	        "Created": "2024-10-28T12:23:15.551667991Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 342522,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-10-28T12:26:57.586010011Z",
	            "FinishedAt": "2024-10-28T12:26:54.298567837Z"
	        },
	        "Image": "sha256:05bcd996665116a573f1bc98d7e2b0a5da287feef26d621bbd294f87ee72c630",
	        "ResolvConfPath": "/var/lib/docker/containers/3a10c58a6b61c43ae06684465f7aa243c18775beded9f8d6e6291b85c62c8f2d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3a10c58a6b61c43ae06684465f7aa243c18775beded9f8d6e6291b85c62c8f2d/hostname",
	        "HostsPath": "/var/lib/docker/containers/3a10c58a6b61c43ae06684465f7aa243c18775beded9f8d6e6291b85c62c8f2d/hosts",
	        "LogPath": "/var/lib/docker/containers/3a10c58a6b61c43ae06684465f7aa243c18775beded9f8d6e6291b85c62c8f2d/3a10c58a6b61c43ae06684465f7aa243c18775beded9f8d6e6291b85c62c8f2d-json.log",
	        "Name": "/old-k8s-version-013200",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-013200:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-013200",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/df4ea2e0103dcb5e1148e4f6a60cc94b2158fe7ba074a91a27a3ecfa287d11b1-init/diff:/var/lib/docker/overlay2/56549ac06c27a2316e9ca3114510d52d2c5e1a27f1ba14da0e1cd8dee84d22ba/diff",
	                "MergedDir": "/var/lib/docker/overlay2/df4ea2e0103dcb5e1148e4f6a60cc94b2158fe7ba074a91a27a3ecfa287d11b1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/df4ea2e0103dcb5e1148e4f6a60cc94b2158fe7ba074a91a27a3ecfa287d11b1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/df4ea2e0103dcb5e1148e4f6a60cc94b2158fe7ba074a91a27a3ecfa287d11b1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-013200",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-013200/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-013200",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-013200",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-013200",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8d3fd0c4c6489ffd5e2ff0058de8425aabea7301cfc61677fce7f1f9ee0beae9",
	            "SandboxKey": "/var/run/docker/netns/8d3fd0c4c648",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "65116"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "65117"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "65118"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "65119"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "65120"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-013200": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.121.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:79:02",
	                    "DriverOpts": null,
	                    "NetworkID": "2a91bcb5f9eace016deacdf1b5677212d3487bd88944c4c5e6a2756b01cf4924",
	                    "EndpointID": "b72757adc5ad3abc11a6bbbf432343ecb6c95b914d28259b897f297b71c2c4df",
	                    "Gateway": "192.168.121.1",
	                    "IPAddress": "192.168.121.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-013200",
	                        "3a10c58a6b61"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
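The `"Ports"` section of the `docker inspect` output above maps each container port to a published host port on 127.0.0.1 (65116 for SSH, 65120 for the Kubernetes API server, etc.). As a minimal sketch of how that mapping can be read programmatically — with the values copied from the inspect output above, not queried live:

```python
import json

# Excerpt of the "Ports" section from the docker inspect output above.
ports_json = """
{
    "22/tcp":    [{"HostIp": "127.0.0.1", "HostPort": "65116"}],
    "2376/tcp":  [{"HostIp": "127.0.0.1", "HostPort": "65117"}],
    "8443/tcp":  [{"HostIp": "127.0.0.1", "HostPort": "65120"}]
}
"""

ports = json.loads(ports_json)

# Map each container port to the first (only) host binding's port.
host_ports = {cport: bindings[0]["HostPort"] for cport, bindings in ports.items()}

print(host_ports["8443/tcp"])  # API server reachable at 127.0.0.1:65120
```

On a live system the same data would come from `docker inspect <container>` (the `NetworkSettings.Ports` field); this sketch only parses the values already recorded in the log.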
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-013200 -n old-k8s-version-013200
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p old-k8s-version-013200 logs -n 25
E1028 12:33:40.880947   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-928900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p old-k8s-version-013200 logs -n 25: (2.9672323s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|-------------------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|-------------------|---------|---------------------|---------------------|
	| stop    | -p embed-certs-232900                                  | embed-certs-232900           | minikube4\jenkins | v1.34.0 | 28 Oct 24 12:27 UTC | 28 Oct 24 12:27 UTC |
	|         | --alsologtostderr -v=3                                 |                              |                   |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-473100  | default-k8s-diff-port-473100 | minikube4\jenkins | v1.34.0 | 28 Oct 24 12:27 UTC | 28 Oct 24 12:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |                   |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |                   |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-232900                 | embed-certs-232900           | minikube4\jenkins | v1.34.0 | 28 Oct 24 12:28 UTC | 28 Oct 24 12:28 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |                   |         |                     |                     |
	| start   | -p embed-certs-232900                                  | embed-certs-232900           | minikube4\jenkins | v1.34.0 | 28 Oct 24 12:28 UTC | 28 Oct 24 12:32 UTC |
	|         | --memory=2200                                          |                              |                   |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |                   |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |                   |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |                   |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-473100 | minikube4\jenkins | v1.34.0 | 28 Oct 24 12:28 UTC | 28 Oct 24 12:28 UTC |
	|         | default-k8s-diff-port-473100                           |                              |                   |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |                   |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-473100       | default-k8s-diff-port-473100 | minikube4\jenkins | v1.34.0 | 28 Oct 24 12:28 UTC | 28 Oct 24 12:28 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |                   |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-473100 | minikube4\jenkins | v1.34.0 | 28 Oct 24 12:28 UTC | 28 Oct 24 12:33 UTC |
	|         | default-k8s-diff-port-473100                           |                              |                   |         |                     |                     |
	|         | --memory=2200                                          |                              |                   |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |                   |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |                   |         |                     |                     |
	|         | --driver=docker                                        |                              |                   |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |                   |         |                     |                     |
	| image   | no-preload-889700 image list                           | no-preload-889700            | minikube4\jenkins | v1.34.0 | 28 Oct 24 12:32 UTC | 28 Oct 24 12:32 UTC |
	|         | --format=json                                          |                              |                   |         |                     |                     |
	| pause   | -p no-preload-889700                                   | no-preload-889700            | minikube4\jenkins | v1.34.0 | 28 Oct 24 12:32 UTC | 28 Oct 24 12:32 UTC |
	|         | --alsologtostderr -v=1                                 |                              |                   |         |                     |                     |
	| unpause | -p no-preload-889700                                   | no-preload-889700            | minikube4\jenkins | v1.34.0 | 28 Oct 24 12:32 UTC | 28 Oct 24 12:32 UTC |
	|         | --alsologtostderr -v=1                                 |                              |                   |         |                     |                     |
	| delete  | -p no-preload-889700                                   | no-preload-889700            | minikube4\jenkins | v1.34.0 | 28 Oct 24 12:32 UTC | 28 Oct 24 12:32 UTC |
	| delete  | -p no-preload-889700                                   | no-preload-889700            | minikube4\jenkins | v1.34.0 | 28 Oct 24 12:32 UTC | 28 Oct 24 12:32 UTC |
	| start   | -p newest-cni-177500 --memory=2200 --alsologtostderr   | newest-cni-177500            | minikube4\jenkins | v1.34.0 | 28 Oct 24 12:32 UTC | 28 Oct 24 12:33 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |                   |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |                   |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |                   |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |                   |         |                     |                     |
	|         | --driver=docker --kubernetes-version=v1.31.2           |                              |                   |         |                     |                     |
	| image   | embed-certs-232900 image list                          | embed-certs-232900           | minikube4\jenkins | v1.34.0 | 28 Oct 24 12:33 UTC | 28 Oct 24 12:33 UTC |
	|         | --format=json                                          |                              |                   |         |                     |                     |
	| pause   | -p embed-certs-232900                                  | embed-certs-232900           | minikube4\jenkins | v1.34.0 | 28 Oct 24 12:33 UTC | 28 Oct 24 12:33 UTC |
	|         | --alsologtostderr -v=1                                 |                              |                   |         |                     |                     |
	| unpause | -p embed-certs-232900                                  | embed-certs-232900           | minikube4\jenkins | v1.34.0 | 28 Oct 24 12:33 UTC | 28 Oct 24 12:33 UTC |
	|         | --alsologtostderr -v=1                                 |                              |                   |         |                     |                     |
	| delete  | -p embed-certs-232900                                  | embed-certs-232900           | minikube4\jenkins | v1.34.0 | 28 Oct 24 12:33 UTC | 28 Oct 24 12:33 UTC |
	| delete  | -p embed-certs-232900                                  | embed-certs-232900           | minikube4\jenkins | v1.34.0 | 28 Oct 24 12:33 UTC | 28 Oct 24 12:33 UTC |
	| image   | default-k8s-diff-port-473100                           | default-k8s-diff-port-473100 | minikube4\jenkins | v1.34.0 | 28 Oct 24 12:33 UTC | 28 Oct 24 12:33 UTC |
	|         | image list --format=json                               |                              |                   |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-473100 | minikube4\jenkins | v1.34.0 | 28 Oct 24 12:33 UTC | 28 Oct 24 12:33 UTC |
	|         | default-k8s-diff-port-473100                           |                              |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |                   |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-473100 | minikube4\jenkins | v1.34.0 | 28 Oct 24 12:33 UTC | 28 Oct 24 12:33 UTC |
	|         | default-k8s-diff-port-473100                           |                              |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |                   |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-473100 | minikube4\jenkins | v1.34.0 | 28 Oct 24 12:33 UTC | 28 Oct 24 12:33 UTC |
	|         | default-k8s-diff-port-473100                           |                              |                   |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-177500             | newest-cni-177500            | minikube4\jenkins | v1.34.0 | 28 Oct 24 12:33 UTC | 28 Oct 24 12:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |                   |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |                   |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-473100 | minikube4\jenkins | v1.34.0 | 28 Oct 24 12:33 UTC | 28 Oct 24 12:33 UTC |
	|         | default-k8s-diff-port-473100                           |                              |                   |         |                     |                     |
	| stop    | -p newest-cni-177500                                   | newest-cni-177500            | minikube4\jenkins | v1.34.0 | 28 Oct 24 12:33 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |                   |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 12:32:20
	Running on machine: minikube4
	Binary: Built with gc go1.23.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 12:32:20.272064   10000 out.go:345] Setting OutFile to fd 776 ...
	I1028 12:32:20.349002   10000 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:32:20.349002   10000 out.go:358] Setting ErrFile to fd 1596...
	I1028 12:32:20.349002   10000 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:32:20.371284   10000 out.go:352] Setting JSON to false
	I1028 12:32:20.375980   10000 start.go:129] hostinfo: {"hostname":"minikube4","uptime":5836,"bootTime":1730112903,"procs":211,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5073 Build 19045.5073","kernelVersion":"10.0.19045.5073 Build 19045.5073","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1028 12:32:20.375980   10000 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 12:32:20.383565   10000 out.go:177] * [newest-cni-177500] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5073 Build 19045.5073
	I1028 12:32:20.388372   10000 notify.go:220] Checking for updates...
	I1028 12:32:20.397309   10000 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1028 12:32:20.408289   10000 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 12:32:20.414362   10000 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1028 12:32:20.419859   10000 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 12:32:20.425556   10000 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 12:32:17.120589   16056 pod_ready.go:103] pod "metrics-server-6867b74b74-4cl9p" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:19.624226   16056 pod_ready.go:103] pod "metrics-server-6867b74b74-4cl9p" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:16.542990    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:18.546065    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:20.547136    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:20.429551   10000 config.go:182] Loaded profile config "default-k8s-diff-port-473100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 12:32:20.430553   10000 config.go:182] Loaded profile config "embed-certs-232900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 12:32:20.430553   10000 config.go:182] Loaded profile config "old-k8s-version-013200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1028 12:32:20.431539   10000 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 12:32:20.651803   10000 docker.go:123] docker version: linux-27.2.0:Docker Desktop 4.34.2 (167172)
	I1028 12:32:20.660787   10000 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1028 12:32:20.990727   10000 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:88 OomKillDisable:true NGoroutines:92 SystemTime:2024-10-28 12:32:20.958720133 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657532416 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe Schema
Version:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https:/
/github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I1028 12:32:20.994739   10000 out.go:177] * Using the docker driver based on user configuration
	I1028 12:32:20.997722   10000 start.go:297] selected driver: docker
	I1028 12:32:20.997722   10000 start.go:901] validating driver "docker" against <nil>
	I1028 12:32:20.997722   10000 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 12:32:21.135032   10000 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1028 12:32:21.479607   10000 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:88 OomKillDisable:true NGoroutines:92 SystemTime:2024-10-28 12:32:21.457762427 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657532416 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe Schema
Version:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https:/
/github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I1028 12:32:21.479607   10000 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W1028 12:32:21.479607   10000 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1028 12:32:21.481594   10000 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1028 12:32:21.491587   10000 out.go:177] * Using Docker Desktop driver with root privileges
	I1028 12:32:21.496182   10000 cni.go:84] Creating CNI manager for ""
	I1028 12:32:21.496182   10000 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 12:32:21.496182   10000 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1028 12:32:21.496182   10000 start.go:340] cluster config:
	{Name:newest-cni-177500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-177500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:32:21.502521   10000 out.go:177] * Starting "newest-cni-177500" primary control-plane node in "newest-cni-177500" cluster
	I1028 12:32:21.508008   10000 cache.go:121] Beginning downloading kic base image for docker with docker
	I1028 12:32:21.513465   10000 out.go:177] * Pulling base image v0.0.45-1729876044-19868 ...
	I1028 12:32:21.517711   10000 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 12:32:21.517711   10000 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e in local docker daemon
	I1028 12:32:21.518473   10000 preload.go:146] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4
	I1028 12:32:21.518473   10000 cache.go:56] Caching tarball of preloaded images
	I1028 12:32:21.518702   10000 preload.go:172] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1028 12:32:21.518702   10000 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 12:32:21.518702   10000 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-177500\config.json ...
	I1028 12:32:21.518702   10000 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-177500\config.json: {Name:mk5ce6dfdf37b715343776d679f23578232c3368 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:32:21.637384   10000 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e in local docker daemon, skipping pull
	I1028 12:32:21.637384   10000 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e exists in daemon, skipping load
	I1028 12:32:21.637384   10000 cache.go:194] Successfully downloaded all kic artifacts
	I1028 12:32:21.637384   10000 start.go:360] acquireMachinesLock for newest-cni-177500: {Name:mk6f2b06f43ea4982bb255391205c66930394f86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 12:32:21.637384   10000 start.go:364] duration metric: took 0s to acquireMachinesLock for "newest-cni-177500"
	I1028 12:32:21.637384   10000 start.go:93] Provisioning new machine with config: &{Name:newest-cni-177500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-177500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 12:32:21.638380   10000 start.go:125] createHost starting for "" (driver="docker")
	I1028 12:32:17.648831    1884 pod_ready.go:103] pod "metrics-server-6867b74b74-cjtxb" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:19.654597    1884 pod_ready.go:103] pod "metrics-server-6867b74b74-cjtxb" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:21.643364   10000 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1028 12:32:21.643364   10000 start.go:159] libmachine.API.Create for "newest-cni-177500" (driver="docker")
	I1028 12:32:21.643364   10000 client.go:168] LocalClient.Create starting
	I1028 12:32:21.644364   10000 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1028 12:32:21.644364   10000 main.go:141] libmachine: Decoding PEM data...
	I1028 12:32:21.644364   10000 main.go:141] libmachine: Parsing certificate...
	I1028 12:32:21.644364   10000 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1028 12:32:21.644364   10000 main.go:141] libmachine: Decoding PEM data...
	I1028 12:32:21.644364   10000 main.go:141] libmachine: Parsing certificate...
	I1028 12:32:21.655366   10000 cli_runner.go:164] Run: docker network inspect newest-cni-177500 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1028 12:32:21.746678   10000 cli_runner.go:211] docker network inspect newest-cni-177500 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1028 12:32:21.754683   10000 network_create.go:284] running [docker network inspect newest-cni-177500] to gather additional debugging logs...
	I1028 12:32:21.754683   10000 cli_runner.go:164] Run: docker network inspect newest-cni-177500
	W1028 12:32:21.825246   10000 cli_runner.go:211] docker network inspect newest-cni-177500 returned with exit code 1
	I1028 12:32:21.825246   10000 network_create.go:287] error running [docker network inspect newest-cni-177500]: docker network inspect newest-cni-177500: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-177500 not found
	I1028 12:32:21.825246   10000 network_create.go:289] output of [docker network inspect newest-cni-177500]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-177500 not found
	
	** /stderr **
	I1028 12:32:21.837091   10000 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1028 12:32:21.927411   10000 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1028 12:32:21.958346   10000 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1028 12:32:21.979536   10000 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000e67290}
	I1028 12:32:21.979536   10000 network_create.go:124] attempt to create docker network newest-cni-177500 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1028 12:32:21.989544   10000 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-177500 newest-cni-177500
	W1028 12:32:22.062013   10000 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-177500 newest-cni-177500 returned with exit code 1
	W1028 12:32:22.062013   10000 network_create.go:149] failed to create docker network newest-cni-177500 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-177500 newest-cni-177500: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1028 12:32:22.062013   10000 network_create.go:116] failed to create docker network newest-cni-177500 192.168.67.0/24, will retry: subnet is taken
	I1028 12:32:22.088586   10000 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1028 12:32:22.109998   10000 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0018366f0}
	I1028 12:32:22.109998   10000 network_create.go:124] attempt to create docker network newest-cni-177500 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1028 12:32:22.120002   10000 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-177500 newest-cni-177500
	I1028 12:32:22.321976   10000 network_create.go:108] docker network newest-cni-177500 192.168.76.0/24 created
	I1028 12:32:22.322500   10000 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-177500" container
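The subnet fallback visible above (192.168.67.0/24 rejected by the daemon with "Pool overlaps with other one on this address space", then 192.168.76.0/24 accepted) reflects minikube stepping through candidate private /24 subnets and skipping reserved ones. A minimal sketch of that selection loop, with a hypothetical `isTaken` callback standing in for the real `docker network inspect`/`create` attempts:

```go
package main

import "fmt"

// candidates lists the private /24 subnets stepped through in the log
// (192.168.49.0/24, .58, .67, .76, ... in increments of 9).
func candidates() []string {
	var out []string
	for third := 49; third <= 255; third += 9 {
		out = append(out, fmt.Sprintf("192.168.%d.0/24", third))
	}
	return out
}

// pickSubnet returns the first candidate that is not already reserved.
// isTaken is a stand-in for the real reservation check against Docker.
func pickSubnet(isTaken func(string) bool) (string, error) {
	for _, cidr := range candidates() {
		if isTaken(cidr) {
			// mirrors "skipping subnet ... that is reserved" in the log
			continue
		}
		return cidr, nil
	}
	return "", fmt.Errorf("no free private subnet found")
}

func main() {
	// The three subnets the log reports as reserved before success.
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
	}
	subnet, err := pickSubnet(func(c string) bool { return taken[c] })
	if err != nil {
		panic(err)
	}
	fmt.Println(subnet) // 192.168.76.0/24
}
```

With the first three candidates taken, the loop lands on 192.168.76.0/24, matching the static IP 192.168.76.2 computed for the container above.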
	I1028 12:32:22.341668   10000 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1028 12:32:22.438121   10000 cli_runner.go:164] Run: docker volume create newest-cni-177500 --label name.minikube.sigs.k8s.io=newest-cni-177500 --label created_by.minikube.sigs.k8s.io=true
	I1028 12:32:22.705069   10000 oci.go:103] Successfully created a docker volume newest-cni-177500
	I1028 12:32:22.715502   10000 cli_runner.go:164] Run: docker run --rm --name newest-cni-177500-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-177500 --entrypoint /usr/bin/test -v newest-cni-177500:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e -d /var/lib
	I1028 12:32:22.113998   16056 pod_ready.go:103] pod "metrics-server-6867b74b74-4cl9p" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:24.191325   16056 pod_ready.go:103] pod "metrics-server-6867b74b74-4cl9p" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:23.043206    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:25.047067    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:22.170018    1884 pod_ready.go:103] pod "metrics-server-6867b74b74-cjtxb" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:24.648328    1884 pod_ready.go:103] pod "metrics-server-6867b74b74-cjtxb" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:26.651005    1884 pod_ready.go:103] pod "metrics-server-6867b74b74-cjtxb" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:25.536319   10000 cli_runner.go:217] Completed: docker run --rm --name newest-cni-177500-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-177500 --entrypoint /usr/bin/test -v newest-cni-177500:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e -d /var/lib: (2.8207012s)
	I1028 12:32:25.536319   10000 oci.go:107] Successfully prepared a docker volume newest-cni-177500
	I1028 12:32:25.536319   10000 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 12:32:25.537469   10000 kic.go:194] Starting extracting preloaded images to volume ...
	I1028 12:32:25.546308   10000 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-177500:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e -I lz4 -xf /preloaded.tar -C /extractDir
	I1028 12:32:26.616740   16056 pod_ready.go:103] pod "metrics-server-6867b74b74-4cl9p" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:29.116092   16056 pod_ready.go:103] pod "metrics-server-6867b74b74-4cl9p" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:27.544836    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:30.044949    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:29.148773    1884 pod_ready.go:103] pod "metrics-server-6867b74b74-cjtxb" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:31.150788    1884 pod_ready.go:103] pod "metrics-server-6867b74b74-cjtxb" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:31.117725   16056 pod_ready.go:103] pod "metrics-server-6867b74b74-4cl9p" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:33.615294   16056 pod_ready.go:103] pod "metrics-server-6867b74b74-4cl9p" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:32.051950    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:34.544412    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:33.152052    1884 pod_ready.go:103] pod "metrics-server-6867b74b74-cjtxb" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:35.649933    1884 pod_ready.go:103] pod "metrics-server-6867b74b74-cjtxb" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:36.118961   16056 pod_ready.go:103] pod "metrics-server-6867b74b74-4cl9p" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:38.618895   16056 pod_ready.go:103] pod "metrics-server-6867b74b74-4cl9p" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:39.600619   16056 pod_ready.go:82] duration metric: took 4m0.000472s for pod "metrics-server-6867b74b74-4cl9p" in "kube-system" namespace to be "Ready" ...
	E1028 12:32:39.600619   16056 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1028 12:32:39.600619   16056 pod_ready.go:39] duration metric: took 4m11.4863766s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:32:39.600619   16056 api_server.go:52] waiting for apiserver process to appear ...
	I1028 12:32:39.610791   16056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 12:32:39.662002   16056 logs.go:282] 2 containers: [e1043570e5cb 1cde68c8fd06]
	I1028 12:32:39.674606   16056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 12:32:39.725610   16056 logs.go:282] 2 containers: [044f9fa0835b 27bd77256ddc]
	I1028 12:32:39.734608   16056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 12:32:39.776021   16056 logs.go:282] 2 containers: [dbb6281bfa51 a63f9ba773ba]
	I1028 12:32:39.786636   16056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 12:32:39.831135   16056 logs.go:282] 2 containers: [63b7c0ddc6c8 3202c80ad681]
	I1028 12:32:39.840128   16056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 12:32:39.884584   16056 logs.go:282] 2 containers: [88b48a49d7e7 163280a33df4]
	I1028 12:32:39.893335   16056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 12:32:39.939973   16056 logs.go:282] 2 containers: [9bca8071a731 9227e85a197e]
	I1028 12:32:39.951682   16056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 12:32:39.999267   16056 logs.go:282] 0 containers: []
	W1028 12:32:39.999267   16056 logs.go:284] No container was found matching "kindnet"
	I1028 12:32:40.011260   16056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1028 12:32:40.085698   16056 logs.go:282] 1 containers: [fabd96338d86]
	I1028 12:32:40.094323   16056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 12:32:40.139030   16056 logs.go:282] 2 containers: [63ab3547f8d3 a00137fe2377]
	I1028 12:32:40.139030   16056 logs.go:123] Gathering logs for kube-proxy [163280a33df4] ...
	I1028 12:32:40.139030   16056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 163280a33df4"
	I1028 12:32:40.196424   16056 logs.go:123] Gathering logs for Docker ...
	I1028 12:32:40.196610   16056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 12:32:40.244872   16056 logs.go:123] Gathering logs for etcd [044f9fa0835b] ...
	I1028 12:32:40.244872   16056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 044f9fa0835b"
	I1028 12:32:40.346048   16056 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:32:40.346048   16056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 12:32:40.593711   16056 logs.go:123] Gathering logs for kube-apiserver [e1043570e5cb] ...
	I1028 12:32:40.593711   16056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1043570e5cb"
	I1028 12:32:40.663899   16056 logs.go:123] Gathering logs for coredns [dbb6281bfa51] ...
	I1028 12:32:40.663899   16056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbb6281bfa51"
	I1028 12:32:40.709436   16056 logs.go:123] Gathering logs for coredns [a63f9ba773ba] ...
	I1028 12:32:40.709436   16056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a63f9ba773ba"
	I1028 12:32:40.754472   16056 logs.go:123] Gathering logs for kube-scheduler [63b7c0ddc6c8] ...
	I1028 12:32:40.755448   16056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63b7c0ddc6c8"
	I1028 12:32:40.805420   16056 logs.go:123] Gathering logs for kube-scheduler [3202c80ad681] ...
	I1028 12:32:40.805497   16056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3202c80ad681"
	I1028 12:32:36.545218    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:38.546339    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:38.149442    1884 pod_ready.go:103] pod "metrics-server-6867b74b74-cjtxb" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:40.157875    1884 pod_ready.go:103] pod "metrics-server-6867b74b74-cjtxb" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:40.874290   16056 logs.go:123] Gathering logs for kube-proxy [88b48a49d7e7] ...
	I1028 12:32:40.874290   16056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b48a49d7e7"
	I1028 12:32:40.922510   16056 logs.go:123] Gathering logs for kube-controller-manager [9bca8071a731] ...
	I1028 12:32:40.922510   16056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bca8071a731"
	I1028 12:32:40.997286   16056 logs.go:123] Gathering logs for dmesg ...
	I1028 12:32:40.997286   16056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:32:41.026314   16056 logs.go:123] Gathering logs for kubernetes-dashboard [fabd96338d86] ...
	I1028 12:32:41.026314   16056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fabd96338d86"
	I1028 12:32:41.073357   16056 logs.go:123] Gathering logs for storage-provisioner [63ab3547f8d3] ...
	I1028 12:32:41.073357   16056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ab3547f8d3"
	I1028 12:32:41.117319   16056 logs.go:123] Gathering logs for container status ...
	I1028 12:32:41.117319   16056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:32:41.293073   16056 logs.go:123] Gathering logs for kube-controller-manager [9227e85a197e] ...
	I1028 12:32:41.293132   16056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9227e85a197e"
	I1028 12:32:41.362114   16056 logs.go:123] Gathering logs for kube-apiserver [1cde68c8fd06] ...
	I1028 12:32:41.362114   16056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cde68c8fd06"
	I1028 12:32:41.464288   16056 logs.go:123] Gathering logs for etcd [27bd77256ddc] ...
	I1028 12:32:41.464288   16056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27bd77256ddc"
	I1028 12:32:41.558980   16056 logs.go:123] Gathering logs for storage-provisioner [a00137fe2377] ...
	I1028 12:32:41.559989   16056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a00137fe2377"
	I1028 12:32:41.610438   16056 logs.go:123] Gathering logs for kubelet ...
	I1028 12:32:41.611006   16056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:32:44.239150   16056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:32:44.266767   16056 api_server.go:72] duration metric: took 4m23.7360499s to wait for apiserver process to appear ...
	I1028 12:32:44.266880   16056 api_server.go:88] waiting for apiserver healthz status ...
	I1028 12:32:44.276686   16056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 12:32:44.327638   16056 logs.go:282] 2 containers: [e1043570e5cb 1cde68c8fd06]
	I1028 12:32:44.335626   16056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 12:32:44.373629   16056 logs.go:282] 2 containers: [044f9fa0835b 27bd77256ddc]
	I1028 12:32:44.385849   16056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 12:32:44.431515   16056 logs.go:282] 2 containers: [dbb6281bfa51 a63f9ba773ba]
	I1028 12:32:44.441600   16056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 12:32:44.484341   16056 logs.go:282] 2 containers: [63b7c0ddc6c8 3202c80ad681]
	I1028 12:32:44.495199   16056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 12:32:44.538200   16056 logs.go:282] 2 containers: [88b48a49d7e7 163280a33df4]
	I1028 12:32:44.550256   16056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 12:32:44.598486   16056 logs.go:282] 2 containers: [9bca8071a731 9227e85a197e]
	I1028 12:32:44.616389   16056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 12:32:44.667727   16056 logs.go:282] 0 containers: []
	W1028 12:32:44.667727   16056 logs.go:284] No container was found matching "kindnet"
	I1028 12:32:44.678707   16056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 12:32:44.726554   16056 logs.go:282] 2 containers: [63ab3547f8d3 a00137fe2377]
	I1028 12:32:44.739162   16056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1028 12:32:44.794426   16056 logs.go:282] 1 containers: [fabd96338d86]
	I1028 12:32:44.794499   16056 logs.go:123] Gathering logs for dmesg ...
	I1028 12:32:44.794499   16056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:32:44.833188   16056 logs.go:123] Gathering logs for kube-apiserver [1cde68c8fd06] ...
	I1028 12:32:44.833188   16056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cde68c8fd06"
	I1028 12:32:44.914819   16056 logs.go:123] Gathering logs for coredns [a63f9ba773ba] ...
	I1028 12:32:44.914819   16056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a63f9ba773ba"
	I1028 12:32:44.973338   16056 logs.go:123] Gathering logs for kube-scheduler [63b7c0ddc6c8] ...
	I1028 12:32:44.973433   16056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63b7c0ddc6c8"
	I1028 12:32:45.019868   16056 logs.go:123] Gathering logs for kube-controller-manager [9bca8071a731] ...
	I1028 12:32:45.019868   16056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bca8071a731"
	I1028 12:32:45.089675   16056 logs.go:123] Gathering logs for storage-provisioner [63ab3547f8d3] ...
	I1028 12:32:45.089675   16056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ab3547f8d3"
	I1028 12:32:45.145863   16056 logs.go:123] Gathering logs for storage-provisioner [a00137fe2377] ...
	I1028 12:32:45.145863   16056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a00137fe2377"
	I1028 12:32:45.192815   16056 logs.go:123] Gathering logs for kubelet ...
	I1028 12:32:45.192868   16056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:32:45.322235   16056 logs.go:123] Gathering logs for etcd [044f9fa0835b] ...
	I1028 12:32:45.322235   16056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 044f9fa0835b"
	I1028 12:32:45.437230   16056 logs.go:123] Gathering logs for coredns [dbb6281bfa51] ...
	I1028 12:32:45.437230   16056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbb6281bfa51"
	I1028 12:32:45.493424   16056 logs.go:123] Gathering logs for kube-proxy [88b48a49d7e7] ...
	I1028 12:32:45.493478   16056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b48a49d7e7"
	I1028 12:32:45.547990   16056 logs.go:123] Gathering logs for kube-controller-manager [9227e85a197e] ...
	I1028 12:32:45.547990   16056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9227e85a197e"
	I1028 12:32:45.617705   16056 logs.go:123] Gathering logs for etcd [27bd77256ddc] ...
	I1028 12:32:45.617705   16056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27bd77256ddc"
	I1028 12:32:45.704161   16056 logs.go:123] Gathering logs for kube-scheduler [3202c80ad681] ...
	I1028 12:32:45.704161   16056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3202c80ad681"
	I1028 12:32:45.763484   16056 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:32:45.763484   16056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 12:32:41.045343    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:43.895556    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:42.321649    1884 pod_ready.go:103] pod "metrics-server-6867b74b74-cjtxb" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:44.650048    1884 pod_ready.go:103] pod "metrics-server-6867b74b74-cjtxb" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:48.942451   10000 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-177500:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e -I lz4 -xf /preloaded.tar -C /extractDir: (23.3951811s)
	I1028 12:32:48.943004   10000 kic.go:203] duration metric: took 23.4045722s to extract preloaded images to volume ...
	I1028 12:32:48.952594   10000 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1028 12:32:49.348695   10000 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:89 OomKillDisable:true NGoroutines:92 SystemTime:2024-10-28 12:32:49.314141559 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657532416 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I1028 12:32:49.358697   10000 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1028 12:32:49.737462   10000 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-177500 --name newest-cni-177500 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-177500 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-177500 --network newest-cni-177500 --ip 192.168.76.2 --volume newest-cni-177500:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e
	I1028 12:32:46.107315   16056 logs.go:123] Gathering logs for kube-apiserver [e1043570e5cb] ...
	I1028 12:32:46.107382   16056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1043570e5cb"
	I1028 12:32:46.165451   16056 logs.go:123] Gathering logs for kube-proxy [163280a33df4] ...
	I1028 12:32:46.165451   16056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 163280a33df4"
	I1028 12:32:46.217215   16056 logs.go:123] Gathering logs for kubernetes-dashboard [fabd96338d86] ...
	I1028 12:32:46.217321   16056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fabd96338d86"
	I1028 12:32:46.273170   16056 logs.go:123] Gathering logs for Docker ...
	I1028 12:32:46.273170   16056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 12:32:46.328715   16056 logs.go:123] Gathering logs for container status ...
	I1028 12:32:46.328715   16056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:32:48.936703   16056 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:65287/healthz ...
	I1028 12:32:48.954836   16056 api_server.go:279] https://127.0.0.1:65287/healthz returned 200:
	ok
	I1028 12:32:48.960188   16056 api_server.go:141] control plane version: v1.31.2
	I1028 12:32:48.960263   16056 api_server.go:131] duration metric: took 4.6931523s to wait for apiserver health ...
	I1028 12:32:48.960263   16056 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 12:32:48.970098   16056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 12:32:49.039387   16056 logs.go:282] 2 containers: [e1043570e5cb 1cde68c8fd06]
	I1028 12:32:49.069717   16056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 12:32:49.155622   16056 logs.go:282] 2 containers: [044f9fa0835b 27bd77256ddc]
	I1028 12:32:49.166632   16056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 12:32:49.218627   16056 logs.go:282] 2 containers: [dbb6281bfa51 a63f9ba773ba]
	I1028 12:32:49.227639   16056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 12:32:49.276638   16056 logs.go:282] 2 containers: [63b7c0ddc6c8 3202c80ad681]
	I1028 12:32:49.289623   16056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 12:32:49.350697   16056 logs.go:282] 2 containers: [88b48a49d7e7 163280a33df4]
	I1028 12:32:49.359697   16056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 12:32:49.404711   16056 logs.go:282] 2 containers: [9bca8071a731 9227e85a197e]
	I1028 12:32:49.419716   16056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 12:32:49.571629   16056 logs.go:282] 0 containers: []
	W1028 12:32:49.571629   16056 logs.go:284] No container was found matching "kindnet"
	I1028 12:32:49.581640   16056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 12:32:49.627149   16056 logs.go:282] 2 containers: [63ab3547f8d3 a00137fe2377]
	I1028 12:32:49.639332   16056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1028 12:32:49.686017   16056 logs.go:282] 1 containers: [fabd96338d86]
	I1028 12:32:49.686121   16056 logs.go:123] Gathering logs for coredns [dbb6281bfa51] ...
	I1028 12:32:49.686121   16056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbb6281bfa51"
	I1028 12:32:49.743459   16056 logs.go:123] Gathering logs for kube-proxy [88b48a49d7e7] ...
	I1028 12:32:49.743459   16056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b48a49d7e7"
	I1028 12:32:49.790463   16056 logs.go:123] Gathering logs for kube-controller-manager [9bca8071a731] ...
	I1028 12:32:49.791460   16056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bca8071a731"
	I1028 12:32:49.882034   16056 logs.go:123] Gathering logs for storage-provisioner [63ab3547f8d3] ...
	I1028 12:32:49.882034   16056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ab3547f8d3"
	I1028 12:32:49.924183   16056 logs.go:123] Gathering logs for kubernetes-dashboard [fabd96338d86] ...
	I1028 12:32:49.924183   16056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fabd96338d86"
	I1028 12:32:49.983169   16056 logs.go:123] Gathering logs for Docker ...
	I1028 12:32:49.983169   16056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 12:32:50.034773   16056 logs.go:123] Gathering logs for container status ...
	I1028 12:32:50.034773   16056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:32:50.137743   16056 logs.go:123] Gathering logs for dmesg ...
	I1028 12:32:50.137743   16056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:32:50.170749   16056 logs.go:123] Gathering logs for storage-provisioner [a00137fe2377] ...
	I1028 12:32:50.171747   16056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a00137fe2377"
	I1028 12:32:50.221863   16056 logs.go:123] Gathering logs for coredns [a63f9ba773ba] ...
	I1028 12:32:50.222871   16056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a63f9ba773ba"
	I1028 12:32:50.282522   16056 logs.go:123] Gathering logs for etcd [27bd77256ddc] ...
	I1028 12:32:50.282522   16056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27bd77256ddc"
	I1028 12:32:50.362731   16056 logs.go:123] Gathering logs for kube-proxy [163280a33df4] ...
	I1028 12:32:50.362731   16056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 163280a33df4"
	I1028 12:32:50.417058   16056 logs.go:123] Gathering logs for kube-controller-manager [9227e85a197e] ...
	I1028 12:32:50.418423   16056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9227e85a197e"
	I1028 12:32:50.526950   16056 logs.go:123] Gathering logs for etcd [044f9fa0835b] ...
	I1028 12:32:50.526950   16056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 044f9fa0835b"
	I1028 12:32:50.679467   16056 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:32:50.679467   16056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 12:32:46.050221    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:48.052091    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:50.557193    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:47.017443    1884 pod_ready.go:103] pod "metrics-server-6867b74b74-cjtxb" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:49.167625    1884 pod_ready.go:103] pod "metrics-server-6867b74b74-cjtxb" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:51.651457    1884 pod_ready.go:103] pod "metrics-server-6867b74b74-cjtxb" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:50.872502   16056 logs.go:123] Gathering logs for kube-apiserver [e1043570e5cb] ...
	I1028 12:32:50.872502   16056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1043570e5cb"
	I1028 12:32:50.936492   16056 logs.go:123] Gathering logs for kube-apiserver [1cde68c8fd06] ...
	I1028 12:32:50.936492   16056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cde68c8fd06"
	I1028 12:32:51.026484   16056 logs.go:123] Gathering logs for kube-scheduler [63b7c0ddc6c8] ...
	I1028 12:32:51.026484   16056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63b7c0ddc6c8"
	I1028 12:32:51.078933   16056 logs.go:123] Gathering logs for kube-scheduler [3202c80ad681] ...
	I1028 12:32:51.078990   16056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3202c80ad681"
	I1028 12:32:51.140945   16056 logs.go:123] Gathering logs for kubelet ...
	I1028 12:32:51.140945   16056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:32:53.792658   16056 system_pods.go:59] 8 kube-system pods found
	I1028 12:32:53.792658   16056 system_pods.go:61] "coredns-7c65d6cfc9-2hlq5" [395f1a6b-43b8-403a-9a35-10caa1ab6558] Running
	I1028 12:32:53.792658   16056 system_pods.go:61] "etcd-embed-certs-232900" [558be6ef-bac4-4a32-a3d6-cdc47f8344fb] Running
	I1028 12:32:53.792658   16056 system_pods.go:61] "kube-apiserver-embed-certs-232900" [206ca99a-e632-4649-a34f-17dc83c4fb33] Running
	I1028 12:32:53.792658   16056 system_pods.go:61] "kube-controller-manager-embed-certs-232900" [ace3ddbe-64e2-4c5a-8ce6-d0af90e436a1] Running
	I1028 12:32:53.792658   16056 system_pods.go:61] "kube-proxy-gh62j" [f7899f60-d8d0-4f74-834f-e9637fb4934a] Running
	I1028 12:32:53.792658   16056 system_pods.go:61] "kube-scheduler-embed-certs-232900" [520f7314-72ab-4cbd-bd97-cdb3b8c02892] Running
	I1028 12:32:53.792658   16056 system_pods.go:61] "metrics-server-6867b74b74-4cl9p" [3f51156c-f7b3-4302-9de8-ce2c3430421c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 12:32:53.792658   16056 system_pods.go:61] "storage-provisioner" [604184d3-5d5f-4905-a789-35cb6b2c6404] Running
	I1028 12:32:53.792658   16056 system_pods.go:74] duration metric: took 4.8321958s to wait for pod list to return data ...
	I1028 12:32:53.792658   16056 default_sa.go:34] waiting for default service account to be created ...
	I1028 12:32:53.801882   16056 default_sa.go:45] found service account: "default"
	I1028 12:32:53.801882   16056 default_sa.go:55] duration metric: took 9.2234ms for default service account to be created ...
	I1028 12:32:53.801882   16056 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 12:32:53.818291   16056 system_pods.go:86] 8 kube-system pods found
	I1028 12:32:53.818291   16056 system_pods.go:89] "coredns-7c65d6cfc9-2hlq5" [395f1a6b-43b8-403a-9a35-10caa1ab6558] Running
	I1028 12:32:53.818291   16056 system_pods.go:89] "etcd-embed-certs-232900" [558be6ef-bac4-4a32-a3d6-cdc47f8344fb] Running
	I1028 12:32:53.818291   16056 system_pods.go:89] "kube-apiserver-embed-certs-232900" [206ca99a-e632-4649-a34f-17dc83c4fb33] Running
	I1028 12:32:53.818291   16056 system_pods.go:89] "kube-controller-manager-embed-certs-232900" [ace3ddbe-64e2-4c5a-8ce6-d0af90e436a1] Running
	I1028 12:32:53.818291   16056 system_pods.go:89] "kube-proxy-gh62j" [f7899f60-d8d0-4f74-834f-e9637fb4934a] Running
	I1028 12:32:53.818291   16056 system_pods.go:89] "kube-scheduler-embed-certs-232900" [520f7314-72ab-4cbd-bd97-cdb3b8c02892] Running
	I1028 12:32:53.818291   16056 system_pods.go:89] "metrics-server-6867b74b74-4cl9p" [3f51156c-f7b3-4302-9de8-ce2c3430421c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 12:32:53.818291   16056 system_pods.go:89] "storage-provisioner" [604184d3-5d5f-4905-a789-35cb6b2c6404] Running
	I1028 12:32:53.818291   16056 system_pods.go:126] duration metric: took 16.4083ms to wait for k8s-apps to be running ...
	I1028 12:32:53.818291   16056 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 12:32:53.838102   16056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 12:32:53.866563   16056 system_svc.go:56] duration metric: took 48.2695ms WaitForService to wait for kubelet
	I1028 12:32:53.866563   16056 kubeadm.go:582] duration metric: took 4m33.3354502s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 12:32:53.866563   16056 node_conditions.go:102] verifying NodePressure condition ...
	I1028 12:32:53.875702   16056 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I1028 12:32:53.875702   16056 node_conditions.go:123] node cpu capacity is 16
	I1028 12:32:53.875702   16056 node_conditions.go:105] duration metric: took 9.1392ms to run NodePressure ...
	I1028 12:32:53.875702   16056 start.go:241] waiting for startup goroutines ...
	I1028 12:32:53.875702   16056 start.go:246] waiting for cluster config update ...
	I1028 12:32:53.875702   16056 start.go:255] writing updated cluster config ...
	I1028 12:32:53.887685   16056 ssh_runner.go:195] Run: rm -f paused
	I1028 12:32:54.034191   16056 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 12:32:54.037541   16056 out.go:177] * Done! kubectl is now configured to use "embed-certs-232900" cluster and "default" namespace by default
	I1028 12:32:50.627827   10000 cli_runner.go:164] Run: docker container inspect newest-cni-177500 --format={{.State.Running}}
	I1028 12:32:50.728482   10000 cli_runner.go:164] Run: docker container inspect newest-cni-177500 --format={{.State.Status}}
	I1028 12:32:50.813486   10000 cli_runner.go:164] Run: docker exec newest-cni-177500 stat /var/lib/dpkg/alternatives/iptables
	I1028 12:32:50.965492   10000 oci.go:144] the created container "newest-cni-177500" has a running status.
	I1028 12:32:50.965492   10000 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-177500\id_rsa...
	I1028 12:32:51.369018   10000 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-177500\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1028 12:32:51.519986   10000 cli_runner.go:164] Run: docker container inspect newest-cni-177500 --format={{.State.Status}}
	I1028 12:32:51.606168   10000 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1028 12:32:51.606168   10000 kic_runner.go:114] Args: [docker exec --privileged newest-cni-177500 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1028 12:32:51.775713   10000 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-177500\id_rsa...
	I1028 12:32:54.468925   10000 cli_runner.go:164] Run: docker container inspect newest-cni-177500 --format={{.State.Status}}
	I1028 12:32:54.532936   10000 machine.go:93] provisionDockerMachine start ...
	I1028 12:32:54.542935   10000 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-177500
	I1028 12:32:54.618393   10000 main.go:141] libmachine: Using SSH client type: native
	I1028 12:32:54.631941   10000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x743340] 0x745e80 <nil>  [] 0s} 127.0.0.1 49177 <nil> <nil>}
	I1028 12:32:54.631941   10000 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 12:32:54.810408   10000 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-177500
	
	I1028 12:32:54.810408   10000 ubuntu.go:169] provisioning hostname "newest-cni-177500"
	I1028 12:32:54.819341   10000 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-177500
	I1028 12:32:54.914416   10000 main.go:141] libmachine: Using SSH client type: native
	I1028 12:32:54.914416   10000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x743340] 0x745e80 <nil>  [] 0s} 127.0.0.1 49177 <nil> <nil>}
	I1028 12:32:54.914416   10000 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-177500 && echo "newest-cni-177500" | sudo tee /etc/hostname
	I1028 12:32:55.126043   10000 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-177500
	
	I1028 12:32:55.140048   10000 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-177500
	I1028 12:32:55.221044   10000 main.go:141] libmachine: Using SSH client type: native
	I1028 12:32:55.222059   10000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x743340] 0x745e80 <nil>  [] 0s} 127.0.0.1 49177 <nil> <nil>}
	I1028 12:32:55.222059   10000 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-177500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-177500/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-177500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 12:32:53.052144    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:55.544167    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:53.652367    1884 pod_ready.go:103] pod "metrics-server-6867b74b74-cjtxb" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:54.636818    1884 pod_ready.go:82] duration metric: took 4m0.0009009s for pod "metrics-server-6867b74b74-cjtxb" in "kube-system" namespace to be "Ready" ...
	E1028 12:32:54.636924    1884 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1028 12:32:54.637005    1884 pod_ready.go:39] duration metric: took 4m10.2204734s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:32:54.637090    1884 api_server.go:52] waiting for apiserver process to appear ...
	I1028 12:32:54.647353    1884 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 12:32:54.697406    1884 logs.go:282] 2 containers: [c555d3f33c77 5db4fa9fc4a5]
	I1028 12:32:54.706414    1884 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 12:32:54.758964    1884 logs.go:282] 2 containers: [8df4879e9bd1 1342c1533676]
	I1028 12:32:54.767958    1884 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 12:32:54.809327    1884 logs.go:282] 2 containers: [159eafbef1b8 afd1e890f3db]
	I1028 12:32:54.818327    1884 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 12:32:54.860327    1884 logs.go:282] 2 containers: [aed442d6447b 1d7ff665f83b]
	I1028 12:32:54.876540    1884 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 12:32:54.934417    1884 logs.go:282] 2 containers: [19a7edeb4225 822fee0d5274]
	I1028 12:32:54.943410    1884 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 12:32:54.994048    1884 logs.go:282] 2 containers: [5368e9f32ce6 07c04bb06167]
	I1028 12:32:55.003054    1884 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 12:32:55.049684    1884 logs.go:282] 0 containers: []
	W1028 12:32:55.049776    1884 logs.go:284] No container was found matching "kindnet"
	I1028 12:32:55.060069    1884 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 12:32:55.109041    1884 logs.go:282] 2 containers: [9f43aea2dc7b 56a9dda8426c]
	I1028 12:32:55.124059    1884 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1028 12:32:55.170058    1884 logs.go:282] 1 containers: [af5834eed982]
	I1028 12:32:55.170058    1884 logs.go:123] Gathering logs for kube-apiserver [c555d3f33c77] ...
	I1028 12:32:55.170058    1884 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c555d3f33c77"
	I1028 12:32:55.227071    1884 logs.go:123] Gathering logs for coredns [159eafbef1b8] ...
	I1028 12:32:55.227071    1884 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 159eafbef1b8"
	I1028 12:32:55.283644    1884 logs.go:123] Gathering logs for kube-controller-manager [5368e9f32ce6] ...
	I1028 12:32:55.283644    1884 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5368e9f32ce6"
	I1028 12:32:55.361265    1884 logs.go:123] Gathering logs for etcd [8df4879e9bd1] ...
	I1028 12:32:55.362847    1884 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8df4879e9bd1"
	I1028 12:32:55.457170    1884 logs.go:123] Gathering logs for coredns [afd1e890f3db] ...
	I1028 12:32:55.457170    1884 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 afd1e890f3db"
	I1028 12:32:55.508160    1884 logs.go:123] Gathering logs for kube-scheduler [aed442d6447b] ...
	I1028 12:32:55.508160    1884 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aed442d6447b"
	I1028 12:32:55.552165    1884 logs.go:123] Gathering logs for kube-scheduler [1d7ff665f83b] ...
	I1028 12:32:55.552165    1884 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d7ff665f83b"
	I1028 12:32:55.614370    1884 logs.go:123] Gathering logs for storage-provisioner [9f43aea2dc7b] ...
	I1028 12:32:55.614370    1884 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f43aea2dc7b"
	I1028 12:32:55.658682    1884 logs.go:123] Gathering logs for storage-provisioner [56a9dda8426c] ...
	I1028 12:32:55.658682    1884 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56a9dda8426c"
	I1028 12:32:55.703694    1884 logs.go:123] Gathering logs for Docker ...
	I1028 12:32:55.703694    1884 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 12:32:55.755157    1884 logs.go:123] Gathering logs for kubelet ...
	I1028 12:32:55.755157    1884 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:32:55.856199    1884 logs.go:123] Gathering logs for dmesg ...
	I1028 12:32:55.856199    1884 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:32:55.885570    1884 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:32:55.885570    1884 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 12:32:56.074155    1884 logs.go:123] Gathering logs for kube-proxy [19a7edeb4225] ...
	I1028 12:32:56.074155    1884 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19a7edeb4225"
	I1028 12:32:56.131376    1884 logs.go:123] Gathering logs for kubernetes-dashboard [af5834eed982] ...
	I1028 12:32:56.131376    1884 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af5834eed982"
	I1028 12:32:56.177389    1884 logs.go:123] Gathering logs for kube-apiserver [5db4fa9fc4a5] ...
	I1028 12:32:56.177389    1884 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5db4fa9fc4a5"
	I1028 12:32:56.285045    1884 logs.go:123] Gathering logs for etcd [1342c1533676] ...
	I1028 12:32:56.285045    1884 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1342c1533676"
	I1028 12:32:56.388482    1884 logs.go:123] Gathering logs for kube-proxy [822fee0d5274] ...
	I1028 12:32:56.388482    1884 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822fee0d5274"
	I1028 12:32:56.438237    1884 logs.go:123] Gathering logs for kube-controller-manager [07c04bb06167] ...
	I1028 12:32:56.438237    1884 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07c04bb06167"
	I1028 12:32:56.507278    1884 logs.go:123] Gathering logs for container status ...
	I1028 12:32:56.507278    1884 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:32:55.428159   10000 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 12:32:55.428159   10000 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1028 12:32:55.428159   10000 ubuntu.go:177] setting up certificates
	I1028 12:32:55.428159   10000 provision.go:84] configureAuth start
	I1028 12:32:55.437157   10000 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-177500
	I1028 12:32:55.510168   10000 provision.go:143] copyHostCerts
	I1028 12:32:55.510168   10000 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1028 12:32:55.510168   10000 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1028 12:32:55.511165   10000 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1028 12:32:55.512167   10000 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1028 12:32:55.512167   10000 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1028 12:32:55.512167   10000 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1028 12:32:55.514187   10000 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1028 12:32:55.514187   10000 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1028 12:32:55.514187   10000 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1028 12:32:55.515176   10000 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.newest-cni-177500 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-177500]
	I1028 12:32:56.074155   10000 provision.go:177] copyRemoteCerts
	I1028 12:32:56.086151   10000 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 12:32:56.094896   10000 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-177500
	I1028 12:32:56.167386   10000 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49177 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-177500\id_rsa Username:docker}
	I1028 12:32:56.307966   10000 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 12:32:56.357478   10000 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1028 12:32:56.398481   10000 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 12:32:56.448249   10000 provision.go:87] duration metric: took 1.0200479s to configureAuth
	I1028 12:32:56.448249   10000 ubuntu.go:193] setting minikube options for container-runtime
	I1028 12:32:56.448249   10000 config.go:182] Loaded profile config "newest-cni-177500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 12:32:56.457270   10000 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-177500
	I1028 12:32:56.537254   10000 main.go:141] libmachine: Using SSH client type: native
	I1028 12:32:56.537254   10000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x743340] 0x745e80 <nil>  [] 0s} 127.0.0.1 49177 <nil> <nil>}
	I1028 12:32:56.537254   10000 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1028 12:32:56.712585   10000 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1028 12:32:56.712585   10000 ubuntu.go:71] root file system type: overlay
	I1028 12:32:56.712838   10000 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1028 12:32:56.725528   10000 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-177500
	I1028 12:32:56.819369   10000 main.go:141] libmachine: Using SSH client type: native
	I1028 12:32:56.819662   10000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x743340] 0x745e80 <nil>  [] 0s} 127.0.0.1 49177 <nil> <nil>}
	I1028 12:32:56.819662   10000 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1028 12:32:57.043418   10000 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1028 12:32:57.052204   10000 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-177500
	I1028 12:32:57.134209   10000 main.go:141] libmachine: Using SSH client type: native
	I1028 12:32:57.135209   10000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x743340] 0x745e80 <nil>  [] 0s} 127.0.0.1 49177 <nil> <nil>}
	I1028 12:32:57.135209   10000 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1028 12:32:58.632566   10000 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-09-20 11:39:29.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-10-28 12:32:57.024047589 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1028 12:32:58.632566   10000 machine.go:96] duration metric: took 4.0994617s to provisionDockerMachine
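The provisioning step above uses a write-then-diff pattern: the desired unit is written to `docker.service.new`, and the live file is replaced (with a daemon-reload and service restart) only when the two differ, so re-provisioning an already-configured machine is a no-op. A minimal sketch of that pattern, using temp files rather than the real `/lib/systemd/system` paths and omitting the `sudo`/`systemctl` calls:

```shell
# Sketch (not minikube's actual code) of the idempotent unit-update pattern:
# write the desired file to <unit>.new, replace the live file only on diff.
set -eu
dir=$(mktemp -d)
unit="$dir/docker.service"

# stand-in for the distro's stock unit
printf '%s\n' '[Service]' 'Restart=always' > "$unit"
# stand-in for the unit minikube writes via `sudo tee <unit>.new`
printf '%s\n' '[Service]' 'Restart=on-failure' > "$unit.new"

# `diff -u old new || { ... }` runs the update branch only when files differ
updated=no
if ! diff -u "$unit" "$unit.new" > /dev/null 2>&1; then
  mv "$unit.new" "$unit"
  updated=yes   # the real code also runs daemon-reload + enable + restart here
fi
```

On a second run with identical content, `diff` exits 0 and the update branch is skipped entirely, which is why the log prints the diff output only once per provisioning.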
	I1028 12:32:58.632566   10000 client.go:171] duration metric: took 36.9876802s to LocalClient.Create
	I1028 12:32:58.632566   10000 start.go:167] duration metric: took 36.9876802s to libmachine.API.Create "newest-cni-177500"
	I1028 12:32:58.632566   10000 start.go:293] postStartSetup for "newest-cni-177500" (driver="docker")
	I1028 12:32:58.632566   10000 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 12:32:58.644143   10000 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 12:32:58.654041   10000 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-177500
	I1028 12:32:58.724716   10000 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49177 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-177500\id_rsa Username:docker}
	I1028 12:32:58.862074   10000 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 12:32:58.872246   10000 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1028 12:32:58.872246   10000 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1028 12:32:58.872246   10000 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1028 12:32:58.872246   10000 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1028 12:32:58.872246   10000 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1028 12:32:58.872246   10000 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1028 12:32:58.875291   10000 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\111762.pem -> 111762.pem in /etc/ssl/certs
	I1028 12:32:58.887571   10000 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 12:32:58.911104   10000 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\111762.pem --> /etc/ssl/certs/111762.pem (1708 bytes)
	I1028 12:32:58.957142   10000 start.go:296] duration metric: took 324.562ms for postStartSetup
	I1028 12:32:58.969644   10000 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-177500
	I1028 12:32:59.046681   10000 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-177500\config.json ...
	I1028 12:32:59.066162   10000 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1028 12:32:59.074164   10000 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-177500
	I1028 12:32:59.153591   10000 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49177 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-177500\id_rsa Username:docker}
	I1028 12:32:59.289644   10000 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1028 12:32:59.303277   10000 start.go:128] duration metric: took 37.6633483s to createHost
	I1028 12:32:59.303277   10000 start.go:83] releasing machines lock for "newest-cni-177500", held for 37.6643439s
	I1028 12:32:59.313233   10000 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-177500
	I1028 12:32:59.382294   10000 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1028 12:32:59.391257   10000 ssh_runner.go:195] Run: cat /version.json
	I1028 12:32:59.393249   10000 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-177500
	I1028 12:32:59.404392   10000 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-177500
	I1028 12:32:59.472138   10000 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49177 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-177500\id_rsa Username:docker}
	I1028 12:32:59.476679   10000 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49177 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-177500\id_rsa Username:docker}
	W1028 12:32:59.597242   10000 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1028 12:32:59.610285   10000 ssh_runner.go:195] Run: systemctl --version
	I1028 12:32:59.637282   10000 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1028 12:32:59.658256   10000 ssh_runner.go:195] Run: sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	W1028 12:32:59.680947   10000 start.go:439] unable to name loopback interface in configureRuntimes: unable to patch loopback cni config "/etc/cni/net.d/*loopback.conf*": sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;: Process exited with status 1
	stdout:
	
	stderr:
	find: '\\etc\\cni\\net.d': No such file or directory
	W1028 12:32:59.691788   10000 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1028 12:32:59.692809   10000 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1028 12:32:59.696830   10000 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 12:32:59.753951   10000 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 12:32:59.753951   10000 start.go:495] detecting cgroup driver to use...
	I1028 12:32:59.753951   10000 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1028 12:32:59.754949   10000 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 12:32:59.803219   10000 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1028 12:32:59.844458   10000 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1028 12:32:59.868518   10000 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1028 12:32:59.880361   10000 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1028 12:32:59.913098   10000 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1028 12:32:59.945315   10000 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1028 12:32:59.974348   10000 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1028 12:33:00.004325   10000 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 12:33:00.040314   10000 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1028 12:33:00.075455   10000 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1028 12:33:00.112356   10000 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1028 12:33:00.148370   10000 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 12:33:00.177472   10000 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 12:33:00.215806   10000 ssh_runner.go:195] Run: sudo systemctl daemon-reload
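The `config.toml` rewrites above force containerd onto the cgroupfs driver with indentation-preserving `sed` expressions. A sketch of the `SystemdCgroup` edit against a temp file (the real target is `/etc/containerd/config.toml`, edited with `sudo`):

```shell
# Sketch of the in-place sed rewrite from the log: flip SystemdCgroup to
# false while keeping the line's original indentation via the '\1' group.
cfg=$(mktemp)
printf '%s\n' \
  '[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]' \
  '  SystemdCgroup = true' > "$cfg"

# same expression as the logged command
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
driver_line=$(grep 'SystemdCgroup' "$cfg")
```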
	I1028 12:32:57.551604    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:33:00.044329    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:32:59.115817    1884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:32:59.151587    1884 api_server.go:72] duration metric: took 4m22.6265876s to wait for apiserver process to appear ...
	I1028 12:32:59.151587    1884 api_server.go:88] waiting for apiserver healthz status ...
	I1028 12:32:59.161593    1884 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 12:32:59.210011    1884 logs.go:282] 2 containers: [c555d3f33c77 5db4fa9fc4a5]
	I1028 12:32:59.219999    1884 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 12:32:59.263641    1884 logs.go:282] 2 containers: [8df4879e9bd1 1342c1533676]
	I1028 12:32:59.271648    1884 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 12:32:59.313233    1884 logs.go:282] 2 containers: [159eafbef1b8 afd1e890f3db]
	I1028 12:32:59.321250    1884 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 12:32:59.364271    1884 logs.go:282] 2 containers: [aed442d6447b 1d7ff665f83b]
	I1028 12:32:59.373247    1884 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 12:32:59.419694    1884 logs.go:282] 2 containers: [19a7edeb4225 822fee0d5274]
	I1028 12:32:59.429680    1884 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 12:32:59.473657    1884 logs.go:282] 2 containers: [5368e9f32ce6 07c04bb06167]
	I1028 12:32:59.484670    1884 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 12:32:59.528756    1884 logs.go:282] 0 containers: []
	W1028 12:32:59.528756    1884 logs.go:284] No container was found matching "kindnet"
	I1028 12:32:59.540711    1884 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1028 12:32:59.586463    1884 logs.go:282] 1 containers: [af5834eed982]
	I1028 12:32:59.594257    1884 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 12:32:59.642257    1884 logs.go:282] 2 containers: [9f43aea2dc7b 56a9dda8426c]
	I1028 12:32:59.642257    1884 logs.go:123] Gathering logs for kubernetes-dashboard [af5834eed982] ...
	I1028 12:32:59.642257    1884 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af5834eed982"
	I1028 12:32:59.697798    1884 logs.go:123] Gathering logs for Docker ...
	I1028 12:32:59.697798    1884 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 12:32:59.744784    1884 logs.go:123] Gathering logs for kubelet ...
	I1028 12:32:59.744784    1884 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:32:59.857675    1884 logs.go:123] Gathering logs for dmesg ...
	I1028 12:32:59.857675    1884 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:32:59.890532    1884 logs.go:123] Gathering logs for kube-scheduler [aed442d6447b] ...
	I1028 12:32:59.891057    1884 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aed442d6447b"
	I1028 12:32:59.935330    1884 logs.go:123] Gathering logs for kube-controller-manager [07c04bb06167] ...
	I1028 12:32:59.935330    1884 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07c04bb06167"
	I1028 12:32:59.999315    1884 logs.go:123] Gathering logs for storage-provisioner [56a9dda8426c] ...
	I1028 12:32:59.999315    1884 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56a9dda8426c"
	I1028 12:33:00.044329    1884 logs.go:123] Gathering logs for container status ...
	I1028 12:33:00.044329    1884 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:33:00.149871    1884 logs.go:123] Gathering logs for coredns [159eafbef1b8] ...
	I1028 12:33:00.149871    1884 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 159eafbef1b8"
	I1028 12:33:00.203804    1884 logs.go:123] Gathering logs for coredns [afd1e890f3db] ...
	I1028 12:33:00.203804    1884 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 afd1e890f3db"
	I1028 12:33:00.247118    1884 logs.go:123] Gathering logs for kube-proxy [19a7edeb4225] ...
	I1028 12:33:00.247118    1884 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19a7edeb4225"
	I1028 12:33:00.290506    1884 logs.go:123] Gathering logs for kube-controller-manager [5368e9f32ce6] ...
	I1028 12:33:00.290506    1884 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5368e9f32ce6"
	I1028 12:33:00.372135    1884 logs.go:123] Gathering logs for etcd [1342c1533676] ...
	I1028 12:33:00.372135    1884 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1342c1533676"
	I1028 12:33:00.475100    1884 logs.go:123] Gathering logs for kube-scheduler [1d7ff665f83b] ...
	I1028 12:33:00.475100    1884 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d7ff665f83b"
	I1028 12:33:00.543513    1884 logs.go:123] Gathering logs for kube-proxy [822fee0d5274] ...
	I1028 12:33:00.543513    1884 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822fee0d5274"
	I1028 12:33:00.594589    1884 logs.go:123] Gathering logs for storage-provisioner [9f43aea2dc7b] ...
	I1028 12:33:00.594684    1884 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f43aea2dc7b"
	I1028 12:33:00.657806    1884 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:33:00.657806    1884 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 12:33:00.837388    1884 logs.go:123] Gathering logs for kube-apiserver [c555d3f33c77] ...
	I1028 12:33:00.837388    1884 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c555d3f33c77"
	I1028 12:33:00.898386    1884 logs.go:123] Gathering logs for kube-apiserver [5db4fa9fc4a5] ...
	I1028 12:33:00.898386    1884 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5db4fa9fc4a5"
	I1028 12:33:01.007059    1884 logs.go:123] Gathering logs for etcd [8df4879e9bd1] ...
	I1028 12:33:01.007059    1884 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8df4879e9bd1"
	I1028 12:33:00.399100   10000 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1028 12:33:00.631417   10000 start.go:495] detecting cgroup driver to use...
	I1028 12:33:00.631508   10000 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1028 12:33:00.645793   10000 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1028 12:33:00.671821   10000 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I1028 12:33:00.683811   10000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1028 12:33:00.707796   10000 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 12:33:00.760052   10000 ssh_runner.go:195] Run: which cri-dockerd
	I1028 12:33:00.787346   10000 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1028 12:33:00.811576   10000 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1028 12:33:00.858377   10000 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1028 12:33:01.046067   10000 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1028 12:33:01.226320   10000 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1028 12:33:01.226602   10000 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1028 12:33:01.272456   10000 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:33:01.432385   10000 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1028 12:33:02.184907   10000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1028 12:33:02.221309   10000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1028 12:33:02.257735   10000 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1028 12:33:02.440982   10000 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1028 12:33:02.615747   10000 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:33:02.772559   10000 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1028 12:33:02.814394   10000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1028 12:33:02.847622   10000 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:33:03.003165   10000 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1028 12:33:03.156054   10000 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1028 12:33:03.167047   10000 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1028 12:33:03.177066   10000 start.go:563] Will wait 60s for crictl version
	I1028 12:33:03.188089   10000 ssh_runner.go:195] Run: which crictl
	I1028 12:33:03.210631   10000 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 12:33:03.283499   10000 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1028 12:33:03.294482   10000 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1028 12:33:03.355191   10000 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1028 12:33:03.413854   10000 out.go:235] * Preparing Kubernetes v1.31.2 on Docker 27.3.1 ...
	I1028 12:33:03.425222   10000 cli_runner.go:164] Run: docker exec -t newest-cni-177500 dig +short host.docker.internal
	I1028 12:33:03.593256   10000 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1028 12:33:03.604292   10000 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1028 12:33:03.615263   10000 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
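The `/etc/hosts` update above is also idempotent: it filters out any stale `host.minikube.internal` line, appends the current mapping, and copies the result back over the original. A sketch against a temp file (the real command runs under `sudo` against `/etc/hosts`):

```shell
# Sketch of the grep-v / append / copy-back pattern from the log.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.65.1\thost.minikube.internal\n' > "$hosts"

# drop any existing entry, then append the fresh one
{ grep -v 'host\.minikube\.internal$' "$hosts"; \
  printf '192.168.65.254\thost.minikube.internal\n'; } > "$hosts.new"
cp "$hosts.new" "$hosts"   # the real code cp's onto /etc/hosts with sudo

entry=$(grep 'host.minikube.internal' "$hosts")
```

Because the old entry is stripped before the new one is appended, repeated runs leave exactly one `host.minikube.internal` line rather than accumulating duplicates.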
	I1028 12:33:03.652292   10000 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-177500
	I1028 12:33:03.728294   10000 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1028 12:33:03.730708   10000 kubeadm.go:883] updating cluster {Name:newest-cni-177500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-177500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:
0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 12:33:03.730708   10000 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 12:33:03.746502   10000 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1028 12:33:03.791428   10000 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.2
	registry.k8s.io/kube-scheduler:v1.31.2
	registry.k8s.io/kube-controller-manager:v1.31.2
	registry.k8s.io/kube-proxy:v1.31.2
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1028 12:33:03.791428   10000 docker.go:619] Images already preloaded, skipping extraction
	I1028 12:33:03.800397   10000 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1028 12:33:03.845394   10000 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.2
	registry.k8s.io/kube-scheduler:v1.31.2
	registry.k8s.io/kube-controller-manager:v1.31.2
	registry.k8s.io/kube-proxy:v1.31.2
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1028 12:33:03.845394   10000 cache_images.go:84] Images are preloaded, skipping loading
	I1028 12:33:03.845394   10000 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.31.2 docker true true} ...
	I1028 12:33:03.846398   10000 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-177500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:newest-cni-177500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 12:33:03.854396   10000 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1028 12:33:03.953064   10000 cni.go:84] Creating CNI manager for ""
	I1028 12:33:03.953601   10000 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 12:33:03.953717   10000 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I1028 12:33:03.953791   10000 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-177500 NodeName:newest-cni-177500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 12:33:03.953859   10000 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-177500"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    - name: "feature-gates"
	      value: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "feature-gates"
	      value: "ServerSideApply=true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "feature-gates"
	      value: "ServerSideApply=true"
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 12:33:03.964828   10000 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 12:33:03.982838   10000 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 12:33:03.995842   10000 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 12:33:04.023669   10000 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (353 bytes)
	I1028 12:33:04.058283   10000 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 12:33:04.092357   10000 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2485 bytes)
	I1028 12:33:04.147475   10000 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1028 12:33:04.164232   10000 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:33:04.199242   10000 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:33:04.380855   10000 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:33:04.412869   10000 certs.go:68] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-177500 for IP: 192.168.76.2
	I1028 12:33:04.412869   10000 certs.go:194] generating shared ca certs ...
	I1028 12:33:04.412869   10000 certs.go:226] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:33:04.412869   10000 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1028 12:33:04.414847   10000 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1028 12:33:04.414847   10000 certs.go:256] generating profile certs ...
	I1028 12:33:04.415866   10000 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-177500\client.key
	I1028 12:33:04.415866   10000 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-177500\client.crt with IP's: []
	I1028 12:33:04.836936   10000 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-177500\client.crt ...
	I1028 12:33:04.836936   10000 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-177500\client.crt: {Name:mkb8066e42f6cd260117444fcda0860fc4642d5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:33:04.838479   10000 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-177500\client.key ...
	I1028 12:33:04.838479   10000 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-177500\client.key: {Name:mkcf1913b23b4d03be577945d854edd46d1cfea4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:33:04.840402   10000 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-177500\apiserver.key.ab73d635
	I1028 12:33:04.840402   10000 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-177500\apiserver.crt.ab73d635 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1028 12:33:04.934722   10000 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-177500\apiserver.crt.ab73d635 ...
	I1028 12:33:04.934722   10000 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-177500\apiserver.crt.ab73d635: {Name:mk1c96249e7eb001b2bca28e31a432372ad7e624 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:33:04.934722   10000 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-177500\apiserver.key.ab73d635 ...
	I1028 12:33:04.934722   10000 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-177500\apiserver.key.ab73d635: {Name:mk5b9d92c9c510ca83f48bf3e709e87e77402154 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:33:04.937665   10000 certs.go:381] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-177500\apiserver.crt.ab73d635 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-177500\apiserver.crt
	I1028 12:33:04.949698   10000 certs.go:385] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-177500\apiserver.key.ab73d635 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-177500\apiserver.key
	I1028 12:33:04.952693   10000 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-177500\proxy-client.key
	I1028 12:33:04.953096   10000 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-177500\proxy-client.crt with IP's: []
	I1028 12:33:05.288244   10000 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-177500\proxy-client.crt ...
	I1028 12:33:05.288244   10000 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-177500\proxy-client.crt: {Name:mkf289321f326e8f8d45b95da70f401f1322e5ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:33:05.289252   10000 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-177500\proxy-client.key ...
	I1028 12:33:05.289252   10000 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-177500\proxy-client.key: {Name:mkb927b713cd0564662603db7a3ef69aeab05d3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:33:05.301266   10000 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11176.pem (1338 bytes)
	W1028 12:33:05.302104   10000 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11176_empty.pem, impossibly tiny 0 bytes
	I1028 12:33:05.302104   10000 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1028 12:33:05.302104   10000 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1028 12:33:05.302793   10000 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1028 12:33:05.302793   10000 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1028 12:33:05.303767   10000 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\111762.pem (1708 bytes)
	I1028 12:33:05.305762   10000 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 12:33:02.047572    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:33:04.544391    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:33:03.609268    1884 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:65339/healthz ...
	I1028 12:33:03.622257    1884 api_server.go:279] https://127.0.0.1:65339/healthz returned 200:
	ok
	I1028 12:33:03.626262    1884 api_server.go:141] control plane version: v1.31.2
	I1028 12:33:03.626262    1884 api_server.go:131] duration metric: took 4.4744902s to wait for apiserver health ...
	I1028 12:33:03.626262    1884 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 12:33:03.641262    1884 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 12:33:03.686286    1884 logs.go:282] 2 containers: [c555d3f33c77 5db4fa9fc4a5]
	I1028 12:33:03.695276    1884 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 12:33:03.741935    1884 logs.go:282] 2 containers: [8df4879e9bd1 1342c1533676]
	I1028 12:33:03.750503    1884 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 12:33:03.789419    1884 logs.go:282] 2 containers: [159eafbef1b8 afd1e890f3db]
	I1028 12:33:03.798395    1884 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 12:33:03.840406    1884 logs.go:282] 2 containers: [aed442d6447b 1d7ff665f83b]
	I1028 12:33:03.850397    1884 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 12:33:03.901136    1884 logs.go:282] 2 containers: [19a7edeb4225 822fee0d5274]
	I1028 12:33:03.910135    1884 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 12:33:03.952408    1884 logs.go:282] 2 containers: [5368e9f32ce6 07c04bb06167]
	I1028 12:33:03.962846    1884 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 12:33:04.010008    1884 logs.go:282] 0 containers: []
	W1028 12:33:04.010120    1884 logs.go:284] No container was found matching "kindnet"
	I1028 12:33:04.021673    1884 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1028 12:33:04.062276    1884 logs.go:282] 1 containers: [af5834eed982]
	I1028 12:33:04.070461    1884 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 12:33:04.113270    1884 logs.go:282] 2 containers: [9f43aea2dc7b 56a9dda8426c]
	I1028 12:33:04.113270    1884 logs.go:123] Gathering logs for kube-scheduler [aed442d6447b] ...
	I1028 12:33:04.113270    1884 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aed442d6447b"
	I1028 12:33:04.166922    1884 logs.go:123] Gathering logs for kube-scheduler [1d7ff665f83b] ...
	I1028 12:33:04.166922    1884 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d7ff665f83b"
	I1028 12:33:04.233307    1884 logs.go:123] Gathering logs for container status ...
	I1028 12:33:04.233307    1884 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:33:04.324168    1884 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:33:04.324168    1884 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 12:33:04.540373    1884 logs.go:123] Gathering logs for etcd [8df4879e9bd1] ...
	I1028 12:33:04.540373    1884 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8df4879e9bd1"
	I1028 12:33:04.630110    1884 logs.go:123] Gathering logs for coredns [159eafbef1b8] ...
	I1028 12:33:04.630110    1884 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 159eafbef1b8"
	I1028 12:33:04.683093    1884 logs.go:123] Gathering logs for coredns [afd1e890f3db] ...
	I1028 12:33:04.683093    1884 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 afd1e890f3db"
	I1028 12:33:04.734721    1884 logs.go:123] Gathering logs for storage-provisioner [9f43aea2dc7b] ...
	I1028 12:33:04.734721    1884 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f43aea2dc7b"
	I1028 12:33:04.799536    1884 logs.go:123] Gathering logs for kube-apiserver [c555d3f33c77] ...
	I1028 12:33:04.799536    1884 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c555d3f33c77"
	I1028 12:33:04.856424    1884 logs.go:123] Gathering logs for kube-apiserver [5db4fa9fc4a5] ...
	I1028 12:33:04.856424    1884 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5db4fa9fc4a5"
	I1028 12:33:04.971211    1884 logs.go:123] Gathering logs for kube-controller-manager [5368e9f32ce6] ...
	I1028 12:33:04.971211    1884 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5368e9f32ce6"
	I1028 12:33:05.040592    1884 logs.go:123] Gathering logs for kube-controller-manager [07c04bb06167] ...
	I1028 12:33:05.041585    1884 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07c04bb06167"
	I1028 12:33:05.108647    1884 logs.go:123] Gathering logs for kubelet ...
	I1028 12:33:05.109646    1884 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:33:05.220585    1884 logs.go:123] Gathering logs for kube-proxy [822fee0d5274] ...
	I1028 12:33:05.220585    1884 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822fee0d5274"
	I1028 12:33:05.279048    1884 logs.go:123] Gathering logs for kubernetes-dashboard [af5834eed982] ...
	I1028 12:33:05.279048    1884 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af5834eed982"
	I1028 12:33:05.323982    1884 logs.go:123] Gathering logs for storage-provisioner [56a9dda8426c] ...
	I1028 12:33:05.323982    1884 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56a9dda8426c"
	I1028 12:33:05.370185    1884 logs.go:123] Gathering logs for dmesg ...
	I1028 12:33:05.370185    1884 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:33:05.397181    1884 logs.go:123] Gathering logs for etcd [1342c1533676] ...
	I1028 12:33:05.397181    1884 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1342c1533676"
	I1028 12:33:05.498460    1884 logs.go:123] Gathering logs for kube-proxy [19a7edeb4225] ...
	I1028 12:33:05.498460    1884 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19a7edeb4225"
	I1028 12:33:05.546598    1884 logs.go:123] Gathering logs for Docker ...
	I1028 12:33:05.546598    1884 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 12:33:08.105538    1884 system_pods.go:59] 8 kube-system pods found
	I1028 12:33:08.105538    1884 system_pods.go:61] "coredns-7c65d6cfc9-mrq5n" [8b5aa0b1-2cb7-4775-888d-e4bf73667e4e] Running
	I1028 12:33:08.106545    1884 system_pods.go:61] "etcd-default-k8s-diff-port-473100" [ea6afd58-6a98-439f-824d-251361163e42] Running
	I1028 12:33:08.106545    1884 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-473100" [906e481f-4d7a-43d9-86d3-fa1cc5cff7b0] Running
	I1028 12:33:08.106545    1884 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-473100" [47c86243-c460-4e03-9005-7340914c5dca] Running
	I1028 12:33:08.106545    1884 system_pods.go:61] "kube-proxy-g6fkb" [789668db-b289-4db2-80d7-aeb7db893a7a] Running
	I1028 12:33:08.106545    1884 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-473100" [f46a8faf-b2ec-4f4f-b559-d8569f5af619] Running
	I1028 12:33:08.106545    1884 system_pods.go:61] "metrics-server-6867b74b74-cjtxb" [e02b08d2-7fe7-4b0f-899b-2046ecabea31] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 12:33:08.106545    1884 system_pods.go:61] "storage-provisioner" [4efcf045-c94f-45ba-8d68-c077b19c3465] Running
	I1028 12:33:08.106545    1884 system_pods.go:74] duration metric: took 4.4800975s to wait for pod list to return data ...
	I1028 12:33:08.106545    1884 default_sa.go:34] waiting for default service account to be created ...
	I1028 12:33:08.112539    1884 default_sa.go:45] found service account: "default"
	I1028 12:33:08.112539    1884 default_sa.go:55] duration metric: took 5.9934ms for default service account to be created ...
	I1028 12:33:08.112539    1884 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 12:33:08.123559    1884 system_pods.go:86] 8 kube-system pods found
	I1028 12:33:08.123559    1884 system_pods.go:89] "coredns-7c65d6cfc9-mrq5n" [8b5aa0b1-2cb7-4775-888d-e4bf73667e4e] Running
	I1028 12:33:08.123559    1884 system_pods.go:89] "etcd-default-k8s-diff-port-473100" [ea6afd58-6a98-439f-824d-251361163e42] Running
	I1028 12:33:08.123559    1884 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-473100" [906e481f-4d7a-43d9-86d3-fa1cc5cff7b0] Running
	I1028 12:33:08.123559    1884 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-473100" [47c86243-c460-4e03-9005-7340914c5dca] Running
	I1028 12:33:08.123559    1884 system_pods.go:89] "kube-proxy-g6fkb" [789668db-b289-4db2-80d7-aeb7db893a7a] Running
	I1028 12:33:08.123559    1884 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-473100" [f46a8faf-b2ec-4f4f-b559-d8569f5af619] Running
	I1028 12:33:08.123559    1884 system_pods.go:89] "metrics-server-6867b74b74-cjtxb" [e02b08d2-7fe7-4b0f-899b-2046ecabea31] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 12:33:08.123559    1884 system_pods.go:89] "storage-provisioner" [4efcf045-c94f-45ba-8d68-c077b19c3465] Running
	I1028 12:33:08.123559    1884 system_pods.go:126] duration metric: took 11.0196ms to wait for k8s-apps to be running ...
	I1028 12:33:08.123559    1884 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 12:33:08.134545    1884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 12:33:08.163861    1884 system_svc.go:56] duration metric: took 40.301ms WaitForService to wait for kubelet
	I1028 12:33:08.163861    1884 kubeadm.go:582] duration metric: took 4m31.6384893s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 12:33:08.163861    1884 node_conditions.go:102] verifying NodePressure condition ...
	I1028 12:33:08.171353    1884 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I1028 12:33:08.171353    1884 node_conditions.go:123] node cpu capacity is 16
	I1028 12:33:08.171353    1884 node_conditions.go:105] duration metric: took 7.4911ms to run NodePressure ...
	I1028 12:33:08.171353    1884 start.go:241] waiting for startup goroutines ...
	I1028 12:33:08.171353    1884 start.go:246] waiting for cluster config update ...
	I1028 12:33:08.171353    1884 start.go:255] writing updated cluster config ...
	I1028 12:33:08.185324    1884 ssh_runner.go:195] Run: rm -f paused
	I1028 12:33:08.335551    1884 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 12:33:08.340019    1884 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-473100" cluster and "default" namespace by default
	I1028 12:33:05.358177   10000 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1028 12:33:05.406175   10000 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 12:33:05.461455   10000 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1028 12:33:05.509451   10000 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-177500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1028 12:33:05.561584   10000 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-177500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1028 12:33:05.602582   10000 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-177500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 12:33:05.646515   10000 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-177500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 12:33:05.694429   10000 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11176.pem --> /usr/share/ca-certificates/11176.pem (1338 bytes)
	I1028 12:33:05.743261   10000 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\111762.pem --> /usr/share/ca-certificates/111762.pem (1708 bytes)
	I1028 12:33:05.791583   10000 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 12:33:05.840603   10000 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 12:33:05.889336   10000 ssh_runner.go:195] Run: openssl version
	I1028 12:33:05.914737   10000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11176.pem && ln -fs /usr/share/ca-certificates/11176.pem /etc/ssl/certs/11176.pem"
	I1028 12:33:05.946916   10000 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11176.pem
	I1028 12:33:05.960009   10000 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:13 /usr/share/ca-certificates/11176.pem
	I1028 12:33:05.974237   10000 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11176.pem
	I1028 12:33:06.002641   10000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11176.pem /etc/ssl/certs/51391683.0"
	I1028 12:33:06.040567   10000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111762.pem && ln -fs /usr/share/ca-certificates/111762.pem /etc/ssl/certs/111762.pem"
	I1028 12:33:06.082547   10000 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111762.pem
	I1028 12:33:06.095474   10000 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:13 /usr/share/ca-certificates/111762.pem
	I1028 12:33:06.107470   10000 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111762.pem
	I1028 12:33:06.144277   10000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111762.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 12:33:06.178281   10000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 12:33:06.212280   10000 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:33:06.222278   10000 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 11:02 /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:33:06.232280   10000 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:33:06.259283   10000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 12:33:06.301946   10000 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 12:33:06.319285   10000 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 12:33:06.320278   10000 kubeadm.go:392] StartCluster: {Name:newest-cni-177500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-177500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:33:06.329275   10000 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1028 12:33:06.389723   10000 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 12:33:06.421725   10000 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 12:33:06.440728   10000 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1028 12:33:06.450723   10000 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:33:06.467723   10000 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:33:06.467723   10000 kubeadm.go:157] found existing configuration files:
	
	I1028 12:33:06.485003   10000 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 12:33:06.517645   10000 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:33:06.532447   10000 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:33:06.562440   10000 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 12:33:06.588444   10000 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:33:06.598440   10000 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:33:06.626470   10000 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 12:33:06.643462   10000 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:33:06.653468   10000 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:33:06.683518   10000 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 12:33:06.706635   10000 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:33:06.720126   10000 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 12:33:06.745236   10000 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1028 12:33:06.883251   10000 kubeadm.go:310] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1028 12:33:07.050576   10000 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 12:33:06.546456    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:33:08.547134    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:33:10.553851    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:33:13.048123    4716 pod_ready.go:103] pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace has status "Ready":"False"
	I1028 12:33:13.531597    4716 pod_ready.go:82] duration metric: took 4m0.0000834s for pod "metrics-server-9975d5f86-kgknk" in "kube-system" namespace to be "Ready" ...
	E1028 12:33:13.531597    4716 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1028 12:33:13.531597    4716 pod_ready.go:39] duration metric: took 5m24.6000367s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:33:13.531597    4716 api_server.go:52] waiting for apiserver process to appear ...
	I1028 12:33:13.543575    4716 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 12:33:13.594586    4716 logs.go:282] 2 containers: [17aaa55a6fdb 49e5e06a6361]
	I1028 12:33:13.603577    4716 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 12:33:13.654612    4716 logs.go:282] 2 containers: [642d04757828 cc895678b294]
	I1028 12:33:13.666568    4716 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 12:33:13.722587    4716 logs.go:282] 2 containers: [b380cacb66c6 4e391fcae110]
	I1028 12:33:13.733577    4716 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 12:33:13.790617    4716 logs.go:282] 2 containers: [8379b070c9db 9ce4d10d6386]
	I1028 12:33:13.811823    4716 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 12:33:13.859826    4716 logs.go:282] 2 containers: [0a1c612f812e ee68d9004e36]
	I1028 12:33:13.869818    4716 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 12:33:13.921817    4716 logs.go:282] 2 containers: [84a52451395b 1a4a898cd699]
	I1028 12:33:13.940838    4716 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 12:33:13.985834    4716 logs.go:282] 0 containers: []
	W1028 12:33:13.985834    4716 logs.go:284] No container was found matching "kindnet"
	I1028 12:33:14.001819    4716 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 12:33:14.052812    4716 logs.go:282] 2 containers: [7fe2f6b267f7 befcb830733f]
	I1028 12:33:14.065055    4716 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1028 12:33:14.121668    4716 logs.go:282] 1 containers: [7f41acfe30e7]
	I1028 12:33:14.121668    4716 logs.go:123] Gathering logs for kube-scheduler [8379b070c9db] ...
	I1028 12:33:14.121668    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8379b070c9db"
	I1028 12:33:14.178010    4716 logs.go:123] Gathering logs for kube-apiserver [17aaa55a6fdb] ...
	I1028 12:33:14.178010    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17aaa55a6fdb"
	I1028 12:33:14.257621    4716 logs.go:123] Gathering logs for etcd [642d04757828] ...
	I1028 12:33:14.257621    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 642d04757828"
	I1028 12:33:14.336625    4716 logs.go:123] Gathering logs for kube-proxy [0a1c612f812e] ...
	I1028 12:33:14.336625    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a1c612f812e"
	I1028 12:33:14.396735    4716 logs.go:123] Gathering logs for kube-proxy [ee68d9004e36] ...
	I1028 12:33:14.396735    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee68d9004e36"
	I1028 12:33:14.449712    4716 logs.go:123] Gathering logs for Docker ...
	I1028 12:33:14.449712    4716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 12:33:14.502470    4716 logs.go:123] Gathering logs for dmesg ...
	I1028 12:33:14.502470    4716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:33:14.540108    4716 logs.go:123] Gathering logs for etcd [cc895678b294] ...
	I1028 12:33:14.540108    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc895678b294"
	I1028 12:33:14.624281    4716 logs.go:123] Gathering logs for kube-scheduler [9ce4d10d6386] ...
	I1028 12:33:14.624281    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ce4d10d6386"
	I1028 12:33:14.695343    4716 logs.go:123] Gathering logs for kube-controller-manager [84a52451395b] ...
	I1028 12:33:14.695343    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84a52451395b"
	I1028 12:33:14.769374    4716 logs.go:123] Gathering logs for storage-provisioner [7fe2f6b267f7] ...
	I1028 12:33:14.769374    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fe2f6b267f7"
	I1028 12:33:14.829346    4716 logs.go:123] Gathering logs for storage-provisioner [befcb830733f] ...
	I1028 12:33:14.829346    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 befcb830733f"
	I1028 12:33:14.877347    4716 logs.go:123] Gathering logs for kube-apiserver [49e5e06a6361] ...
	I1028 12:33:14.877347    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49e5e06a6361"
	I1028 12:33:14.998130    4716 logs.go:123] Gathering logs for coredns [4e391fcae110] ...
	I1028 12:33:14.998130    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e391fcae110"
	I1028 12:33:15.059123    4716 logs.go:123] Gathering logs for coredns [b380cacb66c6] ...
	I1028 12:33:15.059123    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b380cacb66c6"
	I1028 12:33:15.118136    4716 logs.go:123] Gathering logs for kube-controller-manager [1a4a898cd699] ...
	I1028 12:33:15.118136    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a4a898cd699"
	I1028 12:33:15.208127    4716 logs.go:123] Gathering logs for kubernetes-dashboard [7f41acfe30e7] ...
	I1028 12:33:15.208127    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f41acfe30e7"
	I1028 12:33:15.261132    4716 logs.go:123] Gathering logs for container status ...
	I1028 12:33:15.261132    4716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:33:15.371385    4716 logs.go:123] Gathering logs for kubelet ...
	I1028 12:33:15.371385    4716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1028 12:33:15.460552    4716 logs.go:138] Found kubelet problem: Oct 28 12:27:54 old-k8s-version-013200 kubelet[1888]: E1028 12:27:54.029375    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W1028 12:33:15.466361    4716 logs.go:138] Found kubelet problem: Oct 28 12:27:55 old-k8s-version-013200 kubelet[1888]: E1028 12:27:55.942392    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.466361    4716 logs.go:138] Found kubelet problem: Oct 28 12:27:57 old-k8s-version-013200 kubelet[1888]: E1028 12:27:57.107195    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.479092    4716 logs.go:138] Found kubelet problem: Oct 28 12:28:11 old-k8s-version-013200 kubelet[1888]: E1028 12:28:11.122198    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W1028 12:33:15.484517    4716 logs.go:138] Found kubelet problem: Oct 28 12:28:13 old-k8s-version-013200 kubelet[1888]: E1028 12:28:13.332946    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W1028 12:33:15.485516    4716 logs.go:138] Found kubelet problem: Oct 28 12:28:13 old-k8s-version-013200 kubelet[1888]: E1028 12:28:13.846655    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.486571    4716 logs.go:138] Found kubelet problem: Oct 28 12:28:14 old-k8s-version-013200 kubelet[1888]: E1028 12:28:14.881950    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.487513    4716 logs.go:138] Found kubelet problem: Oct 28 12:28:16 old-k8s-version-013200 kubelet[1888]: E1028 12:28:16.965583    1888 pod_workers.go:191] Error syncing pod 34dc73e1-5d6a-469b-90d3-812ffa9e7fe0 ("storage-provisioner_kube-system(34dc73e1-5d6a-469b-90d3-812ffa9e7fe0)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(34dc73e1-5d6a-469b-90d3-812ffa9e7fe0)"
	W1028 12:33:15.487513    4716 logs.go:138] Found kubelet problem: Oct 28 12:28:23 old-k8s-version-013200 kubelet[1888]: E1028 12:28:23.025449    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.491518    4716 logs.go:138] Found kubelet problem: Oct 28 12:28:35 old-k8s-version-013200 kubelet[1888]: E1028 12:28:35.546175    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W1028 12:33:15.495526    4716 logs.go:138] Found kubelet problem: Oct 28 12:28:38 old-k8s-version-013200 kubelet[1888]: E1028 12:28:38.073497    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W1028 12:33:15.496104    4716 logs.go:138] Found kubelet problem: Oct 28 12:28:50 old-k8s-version-013200 kubelet[1888]: E1028 12:28:50.021427    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.496298    4716 logs.go:138] Found kubelet problem: Oct 28 12:28:51 old-k8s-version-013200 kubelet[1888]: E1028 12:28:51.020441    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.496555    4716 logs.go:138] Found kubelet problem: Oct 28 12:29:02 old-k8s-version-013200 kubelet[1888]: E1028 12:29:02.017808    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.500406    4716 logs.go:138] Found kubelet problem: Oct 28 12:29:03 old-k8s-version-013200 kubelet[1888]: E1028 12:29:03.445268    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W1028 12:33:15.500406    4716 logs.go:138] Found kubelet problem: Oct 28 12:29:15 old-k8s-version-013200 kubelet[1888]: E1028 12:29:15.015992    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.501395    4716 logs.go:138] Found kubelet problem: Oct 28 12:29:17 old-k8s-version-013200 kubelet[1888]: E1028 12:29:17.032348    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.504277    4716 logs.go:138] Found kubelet problem: Oct 28 12:29:27 old-k8s-version-013200 kubelet[1888]: E1028 12:29:27.070258    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W1028 12:33:15.504277    4716 logs.go:138] Found kubelet problem: Oct 28 12:29:33 old-k8s-version-013200 kubelet[1888]: E1028 12:29:33.013280    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.504277    4716 logs.go:138] Found kubelet problem: Oct 28 12:29:42 old-k8s-version-013200 kubelet[1888]: E1028 12:29:42.013804    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.507568    4716 logs.go:138] Found kubelet problem: Oct 28 12:29:44 old-k8s-version-013200 kubelet[1888]: E1028 12:29:44.435765    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W1028 12:33:15.507568    4716 logs.go:138] Found kubelet problem: Oct 28 12:29:53 old-k8s-version-013200 kubelet[1888]: E1028 12:29:53.013992    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.508458    4716 logs.go:138] Found kubelet problem: Oct 28 12:29:59 old-k8s-version-013200 kubelet[1888]: E1028 12:29:59.014335    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.508804    4716 logs.go:138] Found kubelet problem: Oct 28 12:30:06 old-k8s-version-013200 kubelet[1888]: E1028 12:30:06.009919    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.509276    4716 logs.go:138] Found kubelet problem: Oct 28 12:30:11 old-k8s-version-013200 kubelet[1888]: E1028 12:30:11.010136    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.509620    4716 logs.go:138] Found kubelet problem: Oct 28 12:30:18 old-k8s-version-013200 kubelet[1888]: E1028 12:30:18.010397    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.510277    4716 logs.go:138] Found kubelet problem: Oct 28 12:30:23 old-k8s-version-013200 kubelet[1888]: E1028 12:30:23.025022    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.510508    4716 logs.go:138] Found kubelet problem: Oct 28 12:30:33 old-k8s-version-013200 kubelet[1888]: E1028 12:30:33.007126    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.510807    4716 logs.go:138] Found kubelet problem: Oct 28 12:30:38 old-k8s-version-013200 kubelet[1888]: E1028 12:30:38.006958    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.511182    4716 logs.go:138] Found kubelet problem: Oct 28 12:30:46 old-k8s-version-013200 kubelet[1888]: E1028 12:30:46.005686    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.511671    4716 logs.go:138] Found kubelet problem: Oct 28 12:30:53 old-k8s-version-013200 kubelet[1888]: E1028 12:30:53.006426    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.520503    4716 logs.go:138] Found kubelet problem: Oct 28 12:31:00 old-k8s-version-013200 kubelet[1888]: E1028 12:31:00.054849    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W1028 12:33:15.522499    4716 logs.go:138] Found kubelet problem: Oct 28 12:31:06 old-k8s-version-013200 kubelet[1888]: E1028 12:31:06.472718    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W1028 12:33:15.522499    4716 logs.go:138] Found kubelet problem: Oct 28 12:31:13 old-k8s-version-013200 kubelet[1888]: E1028 12:31:13.003165    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.522499    4716 logs.go:138] Found kubelet problem: Oct 28 12:31:18 old-k8s-version-013200 kubelet[1888]: E1028 12:31:18.003362    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.522499    4716 logs.go:138] Found kubelet problem: Oct 28 12:31:24 old-k8s-version-013200 kubelet[1888]: E1028 12:31:24.003108    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.523497    4716 logs.go:138] Found kubelet problem: Oct 28 12:31:30 old-k8s-version-013200 kubelet[1888]: E1028 12:31:30.016995    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.523497    4716 logs.go:138] Found kubelet problem: Oct 28 12:31:38 old-k8s-version-013200 kubelet[1888]: E1028 12:31:38.998959    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.523497    4716 logs.go:138] Found kubelet problem: Oct 28 12:31:43 old-k8s-version-013200 kubelet[1888]: E1028 12:31:43.999179    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.523497    4716 logs.go:138] Found kubelet problem: Oct 28 12:31:52 old-k8s-version-013200 kubelet[1888]: E1028 12:31:52.999572    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.523497    4716 logs.go:138] Found kubelet problem: Oct 28 12:31:58 old-k8s-version-013200 kubelet[1888]: E1028 12:31:58.999520    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.524497    4716 logs.go:138] Found kubelet problem: Oct 28 12:32:08 old-k8s-version-013200 kubelet[1888]: E1028 12:32:08.001604    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.524497    4716 logs.go:138] Found kubelet problem: Oct 28 12:32:10 old-k8s-version-013200 kubelet[1888]: E1028 12:32:10.996463    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.524497    4716 logs.go:138] Found kubelet problem: Oct 28 12:32:22 old-k8s-version-013200 kubelet[1888]: E1028 12:32:22.996367    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.524497    4716 logs.go:138] Found kubelet problem: Oct 28 12:32:23 old-k8s-version-013200 kubelet[1888]: E1028 12:32:23.996397    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.525564    4716 logs.go:138] Found kubelet problem: Oct 28 12:32:37 old-k8s-version-013200 kubelet[1888]: E1028 12:32:37.007064    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.525564    4716 logs.go:138] Found kubelet problem: Oct 28 12:32:37 old-k8s-version-013200 kubelet[1888]: E1028 12:32:37.992511    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.525564    4716 logs.go:138] Found kubelet problem: Oct 28 12:32:48 old-k8s-version-013200 kubelet[1888]: E1028 12:32:48.993292    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.525564    4716 logs.go:138] Found kubelet problem: Oct 28 12:32:51 old-k8s-version-013200 kubelet[1888]: E1028 12:32:51.994731    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.526513    4716 logs.go:138] Found kubelet problem: Oct 28 12:33:01 old-k8s-version-013200 kubelet[1888]: E1028 12:33:00.994748    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.526513    4716 logs.go:138] Found kubelet problem: Oct 28 12:33:05 old-k8s-version-013200 kubelet[1888]: E1028 12:33:05.989461    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.526513    4716 logs.go:138] Found kubelet problem: Oct 28 12:33:11 old-k8s-version-013200 kubelet[1888]: E1028 12:33:11.989269    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I1028 12:33:15.526513    4716 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:33:15.526513    4716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 12:33:15.752348    4716 out.go:358] Setting ErrFile to fd 1748...
	I1028 12:33:15.752348    4716 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1028 12:33:15.752896    4716 out.go:270] X Problems detected in kubelet:
	W1028 12:33:15.752896    4716 out.go:270]   Oct 28 12:32:48 old-k8s-version-013200 kubelet[1888]: E1028 12:32:48.993292    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.753018    4716 out.go:270]   Oct 28 12:32:51 old-k8s-version-013200 kubelet[1888]: E1028 12:32:51.994731    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.753018    4716 out.go:270]   Oct 28 12:33:01 old-k8s-version-013200 kubelet[1888]: E1028 12:33:00.994748    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.753018    4716 out.go:270]   Oct 28 12:33:05 old-k8s-version-013200 kubelet[1888]: E1028 12:33:05.989461    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:15.753018    4716 out.go:270]   Oct 28 12:33:11 old-k8s-version-013200 kubelet[1888]: E1028 12:33:11.989269    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I1028 12:33:15.753018    4716 out.go:358] Setting ErrFile to fd 1748...
	I1028 12:33:15.753018    4716 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:33:24.916119   10000 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 12:33:24.916119   10000 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 12:33:24.916119   10000 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 12:33:24.917144   10000 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 12:33:24.917144   10000 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 12:33:24.917144   10000 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 12:33:24.925129   10000 out.go:235]   - Generating certificates and keys ...
	I1028 12:33:24.925129   10000 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 12:33:24.925129   10000 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 12:33:24.925129   10000 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1028 12:33:24.926122   10000 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1028 12:33:24.926122   10000 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1028 12:33:24.926122   10000 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1028 12:33:24.926122   10000 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1028 12:33:24.927119   10000 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-177500] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1028 12:33:24.927119   10000 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1028 12:33:24.927119   10000 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-177500] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1028 12:33:24.927119   10000 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1028 12:33:24.928127   10000 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1028 12:33:24.928127   10000 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1028 12:33:24.928127   10000 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 12:33:24.928127   10000 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 12:33:24.928127   10000 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 12:33:24.929126   10000 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 12:33:24.929126   10000 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 12:33:24.929126   10000 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 12:33:24.929126   10000 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 12:33:24.930128   10000 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 12:33:24.936121   10000 out.go:235]   - Booting up control plane ...
	I1028 12:33:24.936121   10000 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 12:33:24.936121   10000 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 12:33:24.936121   10000 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 12:33:24.937120   10000 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 12:33:24.937120   10000 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 12:33:24.937120   10000 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 12:33:24.937120   10000 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 12:33:24.938123   10000 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 12:33:24.938123   10000 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.502805139s
	I1028 12:33:24.940111   10000 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 12:33:24.944119   10000 kubeadm.go:310] [api-check] The API server is healthy after 10.002602273s
	I1028 12:33:24.944119   10000 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 12:33:24.944119   10000 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 12:33:24.944119   10000 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 12:33:24.945198   10000 kubeadm.go:310] [mark-control-plane] Marking the node newest-cni-177500 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 12:33:24.945198   10000 kubeadm.go:310] [bootstrap-token] Using token: wd6o1s.eilqfhx717k7ekyp
	I1028 12:33:24.951121   10000 out.go:235]   - Configuring RBAC rules ...
	I1028 12:33:24.951121   10000 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 12:33:24.952112   10000 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 12:33:24.952112   10000 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 12:33:24.952112   10000 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 12:33:24.952112   10000 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 12:33:24.953122   10000 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 12:33:24.953122   10000 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 12:33:24.953122   10000 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 12:33:24.953122   10000 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 12:33:24.953122   10000 kubeadm.go:310] 
	I1028 12:33:24.954144   10000 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 12:33:24.954144   10000 kubeadm.go:310] 
	I1028 12:33:24.954144   10000 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 12:33:24.954144   10000 kubeadm.go:310] 
	I1028 12:33:24.954144   10000 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 12:33:24.954144   10000 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 12:33:24.955237   10000 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 12:33:24.955237   10000 kubeadm.go:310] 
	I1028 12:33:24.955237   10000 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 12:33:24.955237   10000 kubeadm.go:310] 
	I1028 12:33:24.955237   10000 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 12:33:24.955237   10000 kubeadm.go:310] 
	I1028 12:33:24.955237   10000 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 12:33:24.955237   10000 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 12:33:24.956128   10000 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 12:33:24.956128   10000 kubeadm.go:310] 
	I1028 12:33:24.956128   10000 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 12:33:24.956128   10000 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 12:33:24.956128   10000 kubeadm.go:310] 
	I1028 12:33:24.956128   10000 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wd6o1s.eilqfhx717k7ekyp \
	I1028 12:33:24.957125   10000 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2f78bf45fce8b572b50afc1e1b7e235ff30b2e9b5e24531c9d3fbda218f1a38f \
	I1028 12:33:24.957125   10000 kubeadm.go:310] 	--control-plane 
	I1028 12:33:24.957125   10000 kubeadm.go:310] 
	I1028 12:33:24.957125   10000 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 12:33:24.957125   10000 kubeadm.go:310] 
	I1028 12:33:24.957125   10000 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wd6o1s.eilqfhx717k7ekyp \
	I1028 12:33:24.958122   10000 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2f78bf45fce8b572b50afc1e1b7e235ff30b2e9b5e24531c9d3fbda218f1a38f 
	I1028 12:33:24.958122   10000 cni.go:84] Creating CNI manager for ""
	I1028 12:33:24.958122   10000 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 12:33:24.964153   10000 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 12:33:24.989129   10000 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 12:33:25.009128   10000 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 12:33:25.059649   10000 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 12:33:25.070640   10000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:33:25.075657   10000 ops.go:34] apiserver oom_adj: -16
	I1028 12:33:25.076678   10000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-177500 minikube.k8s.io/updated_at=2024_10_28T12_33_25_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f minikube.k8s.io/name=newest-cni-177500 minikube.k8s.io/primary=true
	I1028 12:33:25.330513   10000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:33:25.768536    4716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:33:25.798507    4716 api_server.go:72] duration metric: took 5m50.3897517s to wait for apiserver process to appear ...
	I1028 12:33:25.798507    4716 api_server.go:88] waiting for apiserver healthz status ...
	I1028 12:33:25.807542    4716 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1028 12:33:25.857109    4716 logs.go:282] 2 containers: [17aaa55a6fdb 49e5e06a6361]
	I1028 12:33:25.866122    4716 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1028 12:33:25.922322    4716 logs.go:282] 2 containers: [642d04757828 cc895678b294]
	I1028 12:33:25.935312    4716 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1028 12:33:25.830537   10000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:33:26.340423   10000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:33:26.831579   10000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:33:27.334809   10000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:33:27.830895   10000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:33:28.332037   10000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:33:28.831870   10000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:33:28.973882   10000 kubeadm.go:1113] duration metric: took 3.914071s to wait for elevateKubeSystemPrivileges
	I1028 12:33:28.973882   10000 kubeadm.go:394] duration metric: took 22.6526662s to StartCluster
	I1028 12:33:28.973882   10000 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:33:28.973882   10000 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1028 12:33:28.976892   10000 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:33:28.977880   10000 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1028 12:33:28.977880   10000 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 12:33:28.977880   10000 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-177500"
	I1028 12:33:28.977880   10000 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-177500"
	I1028 12:33:28.977880   10000 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1028 12:33:28.977880   10000 addons.go:69] Setting default-storageclass=true in profile "newest-cni-177500"
	I1028 12:33:28.977880   10000 host.go:66] Checking if "newest-cni-177500" exists ...
	I1028 12:33:28.978889   10000 config.go:182] Loaded profile config "newest-cni-177500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 12:33:28.977880   10000 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-177500"
	I1028 12:33:28.980890   10000 out.go:177] * Verifying Kubernetes components...
	I1028 12:33:29.010895   10000 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:33:29.013887   10000 cli_runner.go:164] Run: docker container inspect newest-cni-177500 --format={{.State.Status}}
	I1028 12:33:29.016906   10000 cli_runner.go:164] Run: docker container inspect newest-cni-177500 --format={{.State.Status}}
	I1028 12:33:29.096887   10000 addons.go:234] Setting addon default-storageclass=true in "newest-cni-177500"
	I1028 12:33:29.096887   10000 host.go:66] Checking if "newest-cni-177500" exists ...
	I1028 12:33:29.099896   10000 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:33:29.101896   10000 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 12:33:29.101896   10000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 12:33:29.114897   10000 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-177500
	I1028 12:33:29.125914   10000 cli_runner.go:164] Run: docker container inspect newest-cni-177500 --format={{.State.Status}}
	I1028 12:33:29.197454   10000 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49177 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-177500\id_rsa Username:docker}
	I1028 12:33:29.202452   10000 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 12:33:29.202452   10000 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 12:33:29.211451   10000 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-177500
	I1028 12:33:29.280529   10000 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49177 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-177500\id_rsa Username:docker}
	I1028 12:33:29.577801   10000 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1028 12:33:29.709929   10000 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 12:33:29.711169   10000 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:33:30.007648   10000 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 12:33:25.982483    4716 logs.go:282] 2 containers: [b380cacb66c6 4e391fcae110]
	I1028 12:33:25.996437    4716 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1028 12:33:26.041627    4716 logs.go:282] 2 containers: [8379b070c9db 9ce4d10d6386]
	I1028 12:33:26.050629    4716 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1028 12:33:26.098908    4716 logs.go:282] 2 containers: [0a1c612f812e ee68d9004e36]
	I1028 12:33:26.106901    4716 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1028 12:33:26.155220    4716 logs.go:282] 2 containers: [84a52451395b 1a4a898cd699]
	I1028 12:33:26.168218    4716 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1028 12:33:26.214235    4716 logs.go:282] 0 containers: []
	W1028 12:33:26.214235    4716 logs.go:284] No container was found matching "kindnet"
	I1028 12:33:26.224231    4716 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1028 12:33:26.265220    4716 logs.go:282] 1 containers: [7f41acfe30e7]
	I1028 12:33:26.277229    4716 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1028 12:33:26.329423    4716 logs.go:282] 2 containers: [7fe2f6b267f7 befcb830733f]
	I1028 12:33:26.329423    4716 logs.go:123] Gathering logs for kube-scheduler [9ce4d10d6386] ...
	I1028 12:33:26.329423    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ce4d10d6386"
	I1028 12:33:26.392436    4716 logs.go:123] Gathering logs for kube-controller-manager [84a52451395b] ...
	I1028 12:33:26.392436    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84a52451395b"
	I1028 12:33:26.471265    4716 logs.go:123] Gathering logs for storage-provisioner [befcb830733f] ...
	I1028 12:33:26.472279    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 befcb830733f"
	I1028 12:33:26.523487    4716 logs.go:123] Gathering logs for container status ...
	I1028 12:33:26.523487    4716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:33:26.627567    4716 logs.go:123] Gathering logs for etcd [642d04757828] ...
	I1028 12:33:26.627567    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 642d04757828"
	I1028 12:33:26.703581    4716 logs.go:123] Gathering logs for coredns [4e391fcae110] ...
	I1028 12:33:26.703581    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e391fcae110"
	I1028 12:33:26.761576    4716 logs.go:123] Gathering logs for kube-controller-manager [1a4a898cd699] ...
	I1028 12:33:26.761576    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a4a898cd699"
	I1028 12:33:26.850582    4716 logs.go:123] Gathering logs for kubernetes-dashboard [7f41acfe30e7] ...
	I1028 12:33:26.850582    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f41acfe30e7"
	I1028 12:33:26.939799    4716 logs.go:123] Gathering logs for kube-apiserver [49e5e06a6361] ...
	I1028 12:33:26.939799    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49e5e06a6361"
	I1028 12:33:27.040996    4716 logs.go:123] Gathering logs for kube-proxy [ee68d9004e36] ...
	I1028 12:33:27.040996    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee68d9004e36"
	I1028 12:33:27.090255    4716 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:33:27.090255    4716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 12:33:27.330970    4716 logs.go:123] Gathering logs for kube-proxy [0a1c612f812e] ...
	I1028 12:33:27.330970    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a1c612f812e"
	I1028 12:33:27.389529    4716 logs.go:123] Gathering logs for Docker ...
	I1028 12:33:27.389529    4716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1028 12:33:27.446527    4716 logs.go:123] Gathering logs for kubelet ...
	I1028 12:33:27.446527    4716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1028 12:33:27.542005    4716 logs.go:138] Found kubelet problem: Oct 28 12:27:54 old-k8s-version-013200 kubelet[1888]: E1028 12:27:54.029375    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W1028 12:33:27.543013    4716 logs.go:138] Found kubelet problem: Oct 28 12:27:55 old-k8s-version-013200 kubelet[1888]: E1028 12:27:55.942392    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.544004    4716 logs.go:138] Found kubelet problem: Oct 28 12:27:57 old-k8s-version-013200 kubelet[1888]: E1028 12:27:57.107195    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.547018    4716 logs.go:138] Found kubelet problem: Oct 28 12:28:11 old-k8s-version-013200 kubelet[1888]: E1028 12:28:11.122198    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W1028 12:33:27.550023    4716 logs.go:138] Found kubelet problem: Oct 28 12:28:13 old-k8s-version-013200 kubelet[1888]: E1028 12:28:13.332946    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W1028 12:33:27.551015    4716 logs.go:138] Found kubelet problem: Oct 28 12:28:13 old-k8s-version-013200 kubelet[1888]: E1028 12:28:13.846655    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.551015    4716 logs.go:138] Found kubelet problem: Oct 28 12:28:14 old-k8s-version-013200 kubelet[1888]: E1028 12:28:14.881950    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.552018    4716 logs.go:138] Found kubelet problem: Oct 28 12:28:16 old-k8s-version-013200 kubelet[1888]: E1028 12:28:16.965583    1888 pod_workers.go:191] Error syncing pod 34dc73e1-5d6a-469b-90d3-812ffa9e7fe0 ("storage-provisioner_kube-system(34dc73e1-5d6a-469b-90d3-812ffa9e7fe0)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(34dc73e1-5d6a-469b-90d3-812ffa9e7fe0)"
	W1028 12:33:27.552018    4716 logs.go:138] Found kubelet problem: Oct 28 12:28:23 old-k8s-version-013200 kubelet[1888]: E1028 12:28:23.025449    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.559024    4716 logs.go:138] Found kubelet problem: Oct 28 12:28:35 old-k8s-version-013200 kubelet[1888]: E1028 12:28:35.546175    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W1028 12:33:27.565028    4716 logs.go:138] Found kubelet problem: Oct 28 12:28:38 old-k8s-version-013200 kubelet[1888]: E1028 12:28:38.073497    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W1028 12:33:27.565028    4716 logs.go:138] Found kubelet problem: Oct 28 12:28:50 old-k8s-version-013200 kubelet[1888]: E1028 12:28:50.021427    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.566028    4716 logs.go:138] Found kubelet problem: Oct 28 12:28:51 old-k8s-version-013200 kubelet[1888]: E1028 12:28:51.020441    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.566028    4716 logs.go:138] Found kubelet problem: Oct 28 12:29:02 old-k8s-version-013200 kubelet[1888]: E1028 12:29:02.017808    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.570020    4716 logs.go:138] Found kubelet problem: Oct 28 12:29:03 old-k8s-version-013200 kubelet[1888]: E1028 12:29:03.445268    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W1028 12:33:27.570020    4716 logs.go:138] Found kubelet problem: Oct 28 12:29:15 old-k8s-version-013200 kubelet[1888]: E1028 12:29:15.015992    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.571028    4716 logs.go:138] Found kubelet problem: Oct 28 12:29:17 old-k8s-version-013200 kubelet[1888]: E1028 12:29:17.032348    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.574011    4716 logs.go:138] Found kubelet problem: Oct 28 12:29:27 old-k8s-version-013200 kubelet[1888]: E1028 12:29:27.070258    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W1028 12:33:27.574011    4716 logs.go:138] Found kubelet problem: Oct 28 12:29:33 old-k8s-version-013200 kubelet[1888]: E1028 12:29:33.013280    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.574011    4716 logs.go:138] Found kubelet problem: Oct 28 12:29:42 old-k8s-version-013200 kubelet[1888]: E1028 12:29:42.013804    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.577004    4716 logs.go:138] Found kubelet problem: Oct 28 12:29:44 old-k8s-version-013200 kubelet[1888]: E1028 12:29:44.435765    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W1028 12:33:27.577004    4716 logs.go:138] Found kubelet problem: Oct 28 12:29:53 old-k8s-version-013200 kubelet[1888]: E1028 12:29:53.013992    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.577004    4716 logs.go:138] Found kubelet problem: Oct 28 12:29:59 old-k8s-version-013200 kubelet[1888]: E1028 12:29:59.014335    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.577004    4716 logs.go:138] Found kubelet problem: Oct 28 12:30:06 old-k8s-version-013200 kubelet[1888]: E1028 12:30:06.009919    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.577004    4716 logs.go:138] Found kubelet problem: Oct 28 12:30:11 old-k8s-version-013200 kubelet[1888]: E1028 12:30:11.010136    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.577999    4716 logs.go:138] Found kubelet problem: Oct 28 12:30:18 old-k8s-version-013200 kubelet[1888]: E1028 12:30:18.010397    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.577999    4716 logs.go:138] Found kubelet problem: Oct 28 12:30:23 old-k8s-version-013200 kubelet[1888]: E1028 12:30:23.025022    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.577999    4716 logs.go:138] Found kubelet problem: Oct 28 12:30:33 old-k8s-version-013200 kubelet[1888]: E1028 12:30:33.007126    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.577999    4716 logs.go:138] Found kubelet problem: Oct 28 12:30:38 old-k8s-version-013200 kubelet[1888]: E1028 12:30:38.006958    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.578997    4716 logs.go:138] Found kubelet problem: Oct 28 12:30:46 old-k8s-version-013200 kubelet[1888]: E1028 12:30:46.005686    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.578997    4716 logs.go:138] Found kubelet problem: Oct 28 12:30:53 old-k8s-version-013200 kubelet[1888]: E1028 12:30:53.006426    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.581005    4716 logs.go:138] Found kubelet problem: Oct 28 12:31:00 old-k8s-version-013200 kubelet[1888]: E1028 12:31:00.054849    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W1028 12:33:27.582998    4716 logs.go:138] Found kubelet problem: Oct 28 12:31:06 old-k8s-version-013200 kubelet[1888]: E1028 12:31:06.472718    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W1028 12:33:27.582998    4716 logs.go:138] Found kubelet problem: Oct 28 12:31:13 old-k8s-version-013200 kubelet[1888]: E1028 12:31:13.003165    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.584004    4716 logs.go:138] Found kubelet problem: Oct 28 12:31:18 old-k8s-version-013200 kubelet[1888]: E1028 12:31:18.003362    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.584004    4716 logs.go:138] Found kubelet problem: Oct 28 12:31:24 old-k8s-version-013200 kubelet[1888]: E1028 12:31:24.003108    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.584004    4716 logs.go:138] Found kubelet problem: Oct 28 12:31:30 old-k8s-version-013200 kubelet[1888]: E1028 12:31:30.016995    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.585006    4716 logs.go:138] Found kubelet problem: Oct 28 12:31:38 old-k8s-version-013200 kubelet[1888]: E1028 12:31:38.998959    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.585006    4716 logs.go:138] Found kubelet problem: Oct 28 12:31:43 old-k8s-version-013200 kubelet[1888]: E1028 12:31:43.999179    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.585006    4716 logs.go:138] Found kubelet problem: Oct 28 12:31:52 old-k8s-version-013200 kubelet[1888]: E1028 12:31:52.999572    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.586005    4716 logs.go:138] Found kubelet problem: Oct 28 12:31:58 old-k8s-version-013200 kubelet[1888]: E1028 12:31:58.999520    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.586005    4716 logs.go:138] Found kubelet problem: Oct 28 12:32:08 old-k8s-version-013200 kubelet[1888]: E1028 12:32:08.001604    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.586005    4716 logs.go:138] Found kubelet problem: Oct 28 12:32:10 old-k8s-version-013200 kubelet[1888]: E1028 12:32:10.996463    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.586005    4716 logs.go:138] Found kubelet problem: Oct 28 12:32:22 old-k8s-version-013200 kubelet[1888]: E1028 12:32:22.996367    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.587006    4716 logs.go:138] Found kubelet problem: Oct 28 12:32:23 old-k8s-version-013200 kubelet[1888]: E1028 12:32:23.996397    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.587006    4716 logs.go:138] Found kubelet problem: Oct 28 12:32:37 old-k8s-version-013200 kubelet[1888]: E1028 12:32:37.007064    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.588008    4716 logs.go:138] Found kubelet problem: Oct 28 12:32:37 old-k8s-version-013200 kubelet[1888]: E1028 12:32:37.992511    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.588008    4716 logs.go:138] Found kubelet problem: Oct 28 12:32:48 old-k8s-version-013200 kubelet[1888]: E1028 12:32:48.993292    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.588008    4716 logs.go:138] Found kubelet problem: Oct 28 12:32:51 old-k8s-version-013200 kubelet[1888]: E1028 12:32:51.994731    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.588008    4716 logs.go:138] Found kubelet problem: Oct 28 12:33:01 old-k8s-version-013200 kubelet[1888]: E1028 12:33:00.994748    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.588008    4716 logs.go:138] Found kubelet problem: Oct 28 12:33:05 old-k8s-version-013200 kubelet[1888]: E1028 12:33:05.989461    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.589003    4716 logs.go:138] Found kubelet problem: Oct 28 12:33:11 old-k8s-version-013200 kubelet[1888]: E1028 12:33:11.989269    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.589003    4716 logs.go:138] Found kubelet problem: Oct 28 12:33:16 old-k8s-version-013200 kubelet[1888]: E1028 12:33:16.990577    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.589003    4716 logs.go:138] Found kubelet problem: Oct 28 12:33:23 old-k8s-version-013200 kubelet[1888]: E1028 12:33:23.991584    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I1028 12:33:27.589003    4716 logs.go:123] Gathering logs for dmesg ...
	I1028 12:33:27.589003    4716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:33:27.621013    4716 logs.go:123] Gathering logs for coredns [b380cacb66c6] ...
	I1028 12:33:27.621013    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b380cacb66c6"
	I1028 12:33:27.676401    4716 logs.go:123] Gathering logs for kube-scheduler [8379b070c9db] ...
	I1028 12:33:27.676401    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8379b070c9db"
	I1028 12:33:27.742685    4716 logs.go:123] Gathering logs for storage-provisioner [7fe2f6b267f7] ...
	I1028 12:33:27.742685    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fe2f6b267f7"
	I1028 12:33:27.808905    4716 logs.go:123] Gathering logs for kube-apiserver [17aaa55a6fdb] ...
	I1028 12:33:27.808905    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17aaa55a6fdb"
	I1028 12:33:27.883904    4716 logs.go:123] Gathering logs for etcd [cc895678b294] ...
	I1028 12:33:27.883904    4716 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc895678b294"
	I1028 12:33:27.964902    4716 out.go:358] Setting ErrFile to fd 1748...
	I1028 12:33:27.964902    4716 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1028 12:33:27.964902    4716 out.go:270] X Problems detected in kubelet:
	W1028 12:33:27.964902    4716 out.go:270]   Oct 28 12:33:01 old-k8s-version-013200 kubelet[1888]: E1028 12:33:00.994748    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.964902    4716 out.go:270]   Oct 28 12:33:05 old-k8s-version-013200 kubelet[1888]: E1028 12:33:05.989461    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.964902    4716 out.go:270]   Oct 28 12:33:11 old-k8s-version-013200 kubelet[1888]: E1028 12:33:11.989269    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.964902    4716 out.go:270]   Oct 28 12:33:16 old-k8s-version-013200 kubelet[1888]: E1028 12:33:16.990577    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W1028 12:33:27.964902    4716 out.go:270]   Oct 28 12:33:23 old-k8s-version-013200 kubelet[1888]: E1028 12:33:23.991584    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I1028 12:33:27.964902    4716 out.go:358] Setting ErrFile to fd 1748...
	I1028 12:33:27.964902    4716 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:33:31.102055   10000 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.5240871s)
	I1028 12:33:31.102055   10000 start.go:971] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
	I1028 12:33:31.755922   10000 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-177500" context rescaled to 1 replicas
	I1028 12:33:31.995560   10000 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.2854308s)
	I1028 12:33:31.995956   10000 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.2846164s)
	I1028 12:33:31.996050   10000 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.98832s)
	I1028 12:33:32.011563   10000 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-177500
	I1028 12:33:32.083566   10000 api_server.go:52] waiting for apiserver process to appear ...
	I1028 12:33:32.084591   10000 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1028 12:33:32.088579   10000 addons.go:510] duration metric: took 3.1105702s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1028 12:33:32.097571   10000 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:33:32.187391   10000 api_server.go:72] duration metric: took 3.2093786s to wait for apiserver process to appear ...
	I1028 12:33:32.188379   10000 api_server.go:88] waiting for apiserver healthz status ...
	I1028 12:33:32.188379   10000 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:49176/healthz ...
	I1028 12:33:32.205374   10000 api_server.go:279] https://127.0.0.1:49176/healthz returned 200:
	ok
	I1028 12:33:32.270882   10000 api_server.go:141] control plane version: v1.31.2
	I1028 12:33:32.270882   10000 api_server.go:131] duration metric: took 82.5002ms to wait for apiserver health ...
	I1028 12:33:32.270882   10000 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 12:33:32.288899   10000 system_pods.go:59] 8 kube-system pods found
	I1028 12:33:32.288899   10000 system_pods.go:61] "coredns-7c65d6cfc9-tjdrj" [55a41e36-ac83-488a-abdc-ed98f62ee29e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 12:33:32.288899   10000 system_pods.go:61] "coredns-7c65d6cfc9-zldrr" [3d1d4fd0-581e-42bf-8c94-64d607af9ce4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 12:33:32.288899   10000 system_pods.go:61] "etcd-newest-cni-177500" [96b44bb2-66ad-4b4c-a004-f81b8207c030] Running
	I1028 12:33:32.288899   10000 system_pods.go:61] "kube-apiserver-newest-cni-177500" [5229acf5-0056-4c23-bd7f-53267f934247] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1028 12:33:32.288899   10000 system_pods.go:61] "kube-controller-manager-newest-cni-177500" [81226cd4-89dd-4564-817b-12dff6366930] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1028 12:33:32.288899   10000 system_pods.go:61] "kube-proxy-n4r5d" [97a6a576-e6cb-4ba2-9c3d-b7e537d39a8b] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1028 12:33:32.288899   10000 system_pods.go:61] "kube-scheduler-newest-cni-177500" [19eecc8e-5bdc-45c6-a364-2df938dc2696] Running
	I1028 12:33:32.288899   10000 system_pods.go:61] "storage-provisioner" [8f69d0b7-2f51-4924-ad1a-71c46b83e9c4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1028 12:33:32.288899   10000 system_pods.go:74] duration metric: took 18.0163ms to wait for pod list to return data ...
	I1028 12:33:32.288899   10000 default_sa.go:34] waiting for default service account to be created ...
	I1028 12:33:32.379943   10000 default_sa.go:45] found service account: "default"
	I1028 12:33:32.379943   10000 default_sa.go:55] duration metric: took 91.0398ms for default service account to be created ...
	I1028 12:33:32.379943   10000 kubeadm.go:582] duration metric: took 3.4019223s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1028 12:33:32.379943   10000 node_conditions.go:102] verifying NodePressure condition ...
	I1028 12:33:32.389947   10000 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I1028 12:33:32.389947   10000 node_conditions.go:123] node cpu capacity is 16
	I1028 12:33:32.389947   10000 node_conditions.go:105] duration metric: took 10.0036ms to run NodePressure ...
	I1028 12:33:32.389947   10000 start.go:241] waiting for startup goroutines ...
	I1028 12:33:32.389947   10000 start.go:246] waiting for cluster config update ...
	I1028 12:33:32.389947   10000 start.go:255] writing updated cluster config ...
	I1028 12:33:32.407946   10000 ssh_runner.go:195] Run: rm -f paused
	I1028 12:33:32.630364   10000 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 12:33:32.636083   10000 out.go:177] * Done! kubectl is now configured to use "newest-cni-177500" cluster and "default" namespace by default
	I1028 12:33:37.966552    4716 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:65120/healthz ...
	I1028 12:33:37.996926    4716 api_server.go:279] https://127.0.0.1:65120/healthz returned 200:
	ok
	I1028 12:33:38.005912    4716 out.go:201] 
	W1028 12:33:38.009398    4716 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W1028 12:33:38.009521    4716 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W1028 12:33:38.009552    4716 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W1028 12:33:38.009643    4716 out.go:270] * 
	W1028 12:33:38.010818    4716 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 12:33:38.016384    4716 out.go:201] 
	
	
	==> Docker <==
	Oct 28 12:28:35 old-k8s-version-013200 dockerd[1454]: time="2024-10-28T12:28:35.218545273Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4" spanID=d707bee5fe9971b1 traceID=0e5e9132284d43bc6bb89ea2e25a069b
	Oct 28 12:28:35 old-k8s-version-013200 dockerd[1454]: time="2024-10-28T12:28:35.519233600Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4" spanID=d707bee5fe9971b1 traceID=0e5e9132284d43bc6bb89ea2e25a069b
	Oct 28 12:28:35 old-k8s-version-013200 dockerd[1454]: time="2024-10-28T12:28:35.519685670Z" level=warning msg="[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4" spanID=d707bee5fe9971b1 traceID=0e5e9132284d43bc6bb89ea2e25a069b
	Oct 28 12:28:35 old-k8s-version-013200 dockerd[1454]: time="2024-10-28T12:28:35.519910305Z" level=info msg="Attempting next endpoint for pull after error: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" spanID=d707bee5fe9971b1 traceID=0e5e9132284d43bc6bb89ea2e25a069b
	Oct 28 12:28:38 old-k8s-version-013200 dockerd[1454]: time="2024-10-28T12:28:38.056294103Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host" spanID=b12c980e4f3142b2 traceID=78f509d5dd9d25bf87e5ad12283b2055
	Oct 28 12:28:38 old-k8s-version-013200 dockerd[1454]: time="2024-10-28T12:28:38.057311662Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host" spanID=b12c980e4f3142b2 traceID=78f509d5dd9d25bf87e5ad12283b2055
	Oct 28 12:28:38 old-k8s-version-013200 dockerd[1454]: time="2024-10-28T12:28:38.072028954Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host" spanID=b12c980e4f3142b2 traceID=78f509d5dd9d25bf87e5ad12283b2055
	Oct 28 12:29:03 old-k8s-version-013200 dockerd[1454]: time="2024-10-28T12:29:03.246723927Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4" spanID=98055f677aa051e9 traceID=a15cecfa0ce160f2b35de902a3d071f3
	Oct 28 12:29:03 old-k8s-version-013200 dockerd[1454]: time="2024-10-28T12:29:03.432974423Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4" spanID=98055f677aa051e9 traceID=a15cecfa0ce160f2b35de902a3d071f3
	Oct 28 12:29:03 old-k8s-version-013200 dockerd[1454]: time="2024-10-28T12:29:03.433328277Z" level=warning msg="[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4" spanID=98055f677aa051e9 traceID=a15cecfa0ce160f2b35de902a3d071f3
	Oct 28 12:29:03 old-k8s-version-013200 dockerd[1454]: time="2024-10-28T12:29:03.433609420Z" level=info msg="Attempting next endpoint for pull after error: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" spanID=98055f677aa051e9 traceID=a15cecfa0ce160f2b35de902a3d071f3
	Oct 28 12:29:27 old-k8s-version-013200 dockerd[1454]: time="2024-10-28T12:29:27.058913706Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host" spanID=5a8fd3d16dc7e6c8 traceID=035b9fe7696375f72e31b581cbdf3214
	Oct 28 12:29:27 old-k8s-version-013200 dockerd[1454]: time="2024-10-28T12:29:27.059251560Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host" spanID=5a8fd3d16dc7e6c8 traceID=035b9fe7696375f72e31b581cbdf3214
	Oct 28 12:29:27 old-k8s-version-013200 dockerd[1454]: time="2024-10-28T12:29:27.068700048Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host" spanID=5a8fd3d16dc7e6c8 traceID=035b9fe7696375f72e31b581cbdf3214
	Oct 28 12:29:44 old-k8s-version-013200 dockerd[1454]: time="2024-10-28T12:29:44.231948244Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4" spanID=4d4904ac1576cc17 traceID=4a6fe88659b0e4b4f09b45250bd76340
	Oct 28 12:29:44 old-k8s-version-013200 dockerd[1454]: time="2024-10-28T12:29:44.426236751Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4" spanID=4d4904ac1576cc17 traceID=4a6fe88659b0e4b4f09b45250bd76340
	Oct 28 12:29:44 old-k8s-version-013200 dockerd[1454]: time="2024-10-28T12:29:44.426503993Z" level=warning msg="[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4" spanID=4d4904ac1576cc17 traceID=4a6fe88659b0e4b4f09b45250bd76340
	Oct 28 12:29:44 old-k8s-version-013200 dockerd[1454]: time="2024-10-28T12:29:44.426550101Z" level=info msg="Attempting next endpoint for pull after error: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" spanID=4d4904ac1576cc17 traceID=4a6fe88659b0e4b4f09b45250bd76340
	Oct 28 12:31:00 old-k8s-version-013200 dockerd[1454]: time="2024-10-28T12:31:00.044544656Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host" spanID=df8c381c1965996b traceID=d512a1d1f260631c6a411c278ee4791c
	Oct 28 12:31:00 old-k8s-version-013200 dockerd[1454]: time="2024-10-28T12:31:00.044815796Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host" spanID=df8c381c1965996b traceID=d512a1d1f260631c6a411c278ee4791c
	Oct 28 12:31:00 old-k8s-version-013200 dockerd[1454]: time="2024-10-28T12:31:00.053392858Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host" spanID=df8c381c1965996b traceID=d512a1d1f260631c6a411c278ee4791c
	Oct 28 12:31:06 old-k8s-version-013200 dockerd[1454]: time="2024-10-28T12:31:06.237606495Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4" spanID=8ecd98c0d718c6e2 traceID=06b90404e242d6b390c4077f1c76b600
	Oct 28 12:31:06 old-k8s-version-013200 dockerd[1454]: time="2024-10-28T12:31:06.456161446Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4" spanID=8ecd98c0d718c6e2 traceID=06b90404e242d6b390c4077f1c76b600
	Oct 28 12:31:06 old-k8s-version-013200 dockerd[1454]: time="2024-10-28T12:31:06.456434686Z" level=warning msg="[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4" spanID=8ecd98c0d718c6e2 traceID=06b90404e242d6b390c4077f1c76b600
	Oct 28 12:31:06 old-k8s-version-013200 dockerd[1454]: time="2024-10-28T12:31:06.456487394Z" level=info msg="Attempting next endpoint for pull after error: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" spanID=8ecd98c0d718c6e2 traceID=06b90404e242d6b390c4077f1c76b600
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7f41acfe30e7d       kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93        5 minutes ago       Running             kubernetes-dashboard      0                   4de37018faa2c       kubernetes-dashboard-cd95d586-gd2xl
	7fe2f6b267f79       6e38f40d628db                                                                                         5 minutes ago       Running             storage-provisioner       2                   b08781bf4c0e5       storage-provisioner
	b380cacb66c6a       bfe3a36ebd252                                                                                         5 minutes ago       Running             coredns                   1                   37753e62f9bb6       coredns-74ff55c5b-h4dhd
	0a1c612f812e5       10cc881966cfd                                                                                         5 minutes ago       Running             kube-proxy                1                   893633cf9f796       kube-proxy-wm5p7
	befcb830733f3       6e38f40d628db                                                                                         5 minutes ago       Exited              storage-provisioner       1                   b08781bf4c0e5       storage-provisioner
	2a5640ca6896a       56cc512116c8f                                                                                         5 minutes ago       Running             busybox                   1                   396886d4e3c53       busybox
	8379b070c9db5       3138b6e3d4712                                                                                         6 minutes ago       Running             kube-scheduler            1                   4a040cf42cd62       kube-scheduler-old-k8s-version-013200
	17aaa55a6fdb8       ca9843d3b5454                                                                                         6 minutes ago       Running             kube-apiserver            1                   87bbae5d24689       kube-apiserver-old-k8s-version-013200
	84a52451395b6       b9fa1895dcaa6                                                                                         6 minutes ago       Running             kube-controller-manager   1                   384e1b0e711da       kube-controller-manager-old-k8s-version-013200
	642d047578285       0369cf4303ffd                                                                                         6 minutes ago       Running             etcd                      1                   0431813f57849       etcd-old-k8s-version-013200
	3514f7ce7b803       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   7 minutes ago       Exited              busybox                   0                   36b8923177ab2       busybox
	4e391fcae1107       bfe3a36ebd252                                                                                         8 minutes ago       Exited              coredns                   0                   76f4f0779b8ff       coredns-74ff55c5b-h4dhd
	ee68d9004e361       10cc881966cfd                                                                                         8 minutes ago       Exited              kube-proxy                0                   17e633ebd67ba       kube-proxy-wm5p7
	1a4a898cd699e       b9fa1895dcaa6                                                                                         9 minutes ago       Exited              kube-controller-manager   0                   563c723041074       kube-controller-manager-old-k8s-version-013200
	cc895678b2948       0369cf4303ffd                                                                                         9 minutes ago       Exited              etcd                      0                   656037cf24eba       etcd-old-k8s-version-013200
	9ce4d10d63860       3138b6e3d4712                                                                                         9 minutes ago       Exited              kube-scheduler            0                   9db62d7afb774       kube-scheduler-old-k8s-version-013200
	49e5e06a63618       ca9843d3b5454                                                                                         9 minutes ago       Exited              kube-apiserver            0                   524d5b899fb7e       kube-apiserver-old-k8s-version-013200
	
	
	==> coredns [4e391fcae110] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.7.0
	linux/amd64, go1.14.4, f59c03d
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = 512bc0e06a520fa44f35dc15de10fdd6
	[INFO] Reloading complete
	[INFO] 127.0.0.1:52194 - 533 "HINFO IN 5437421654588224770.7083235019242407903. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.068101404s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	I1028 12:25:06.577074       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-10-28 12:24:45.534653542 +0000 UTC m=+0.169094204) (total time: 21.045226233s):
	Trace[2019727887]: [21.045226233s] [21.045226233s] END
	E1028 12:25:06.577151       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	I1028 12:25:06.577186       1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-10-28 12:24:45.534490817 +0000 UTC m=+0.168931379) (total time: 21.045677402s):
	Trace[1427131847]: [21.045677402s] [21.045677402s] END
	E1028 12:25:06.577205       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	I1028 12:25:06.594207       1 trace.go:116] Trace[1474941318]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-10-28 12:24:45.534708351 +0000 UTC m=+0.169149013) (total time: 21.062476306s):
	Trace[1474941318]: [21.062476306s] [21.062476306s] END
	E1028 12:25:06.594337       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E1028 12:26:43.926162       1 reflector.go:382] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=595&timeout=6m5s&timeoutSeconds=365&watch=true": dial tcp 10.96.0.1:443: connect: connection refused
	E1028 12:26:43.926370       1 reflector.go:382] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=200&timeout=6m27s&timeoutSeconds=387&watch=true": dial tcp 10.96.0.1:443: connect: connection refused
	E1028 12:26:43.926381       1 reflector.go:382] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to watch *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?allowWatchBookmarks=true&resourceVersion=601&timeout=8m23s&timeoutSeconds=503&watch=true": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> coredns [b380cacb66c6] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 512bc0e06a520fa44f35dc15de10fdd6
	CoreDNS-1.7.0
	linux/amd64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:33979 - 11026 "HINFO IN 7003834093766974206.9085858243824354663. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.046818892s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	I1028 12:28:17.002609       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-10-28 12:27:55.919812383 +0000 UTC m=+0.105095947) (total time: 21.086270162s):
	Trace[2019727887]: [21.086270162s] [21.086270162s] END
	E1028 12:28:17.002767       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	I1028 12:28:17.002878       1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-10-28 12:27:55.920859744 +0000 UTC m=+0.106143308) (total time: 21.085228402s):
	Trace[1427131847]: [21.085228402s] [21.085228402s] END
	E1028 12:28:17.002945       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	I1028 12:28:17.003833       1 trace.go:116] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-10-28 12:27:55.921193395 +0000 UTC m=+0.106476859) (total time: 21.086198155s):
	Trace[911902081]: [21.086198155s] [21.086198155s] END
	E1028 12:28:17.004041       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> describe nodes <==
	Name:               old-k8s-version-013200
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-013200
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5c9469bd9a8248ec7cc78e5865e6dfb7edd2060f
	                    minikube.k8s.io/name=old-k8s-version-013200
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_28T12_24_24_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 12:24:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-013200
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 12:33:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 12:29:11 +0000   Mon, 28 Oct 2024 12:24:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 12:29:11 +0000   Mon, 28 Oct 2024 12:24:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 12:29:11 +0000   Mon, 28 Oct 2024 12:24:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 12:29:11 +0000   Mon, 28 Oct 2024 12:24:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.121.2
	  Hostname:    old-k8s-version-013200
	Capacity:
	  cpu:                16
	  ephemeral-storage:  1055762868Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32868684Ki
	  pods:               110
	Allocatable:
	  cpu:                16
	  ephemeral-storage:  1055762868Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32868684Ki
	  pods:               110
	System Info:
	  Machine ID:                 26178778663d48f4b999ca8164734524
	  System UUID:                26178778663d48f4b999ca8164734524
	  Boot ID:                    ef217568-0e74-4f75-a115-0b78189354fe
	  Kernel Version:             5.15.153.1-microsoft-standard-WSL2
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m12s
	  kube-system                 coredns-74ff55c5b-h4dhd                           100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     9m2s
	  kube-system                 etcd-old-k8s-version-013200                       100m (0%)     0 (0%)      100Mi (0%)       0 (0%)         9m15s
	  kube-system                 kube-apiserver-old-k8s-version-013200             250m (1%)     0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 kube-controller-manager-old-k8s-version-013200    200m (1%)     0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 kube-proxy-wm5p7                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m2s
	  kube-system                 kube-scheduler-old-k8s-version-013200             100m (0%)     0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 metrics-server-9975d5f86-kgknk                    100m (0%)     0 (0%)      200Mi (0%)       0 (0%)         6m59s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m57s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-q2nvm         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m31s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-gd2xl               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (5%)   0 (0%)
	  memory             370Mi (1%)  170Mi (0%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  9m39s (x8 over 9m39s)  kubelet     Node old-k8s-version-013200 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m39s (x8 over 9m39s)  kubelet     Node old-k8s-version-013200 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m39s (x7 over 9m39s)  kubelet     Node old-k8s-version-013200 status is now: NodeHasSufficientPID
	  Normal  Starting                 9m16s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m15s                  kubelet     Node old-k8s-version-013200 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m15s                  kubelet     Node old-k8s-version-013200 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m15s                  kubelet     Node old-k8s-version-013200 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m15s                  kubelet     Node old-k8s-version-013200 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9m15s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m5s                   kubelet     Node old-k8s-version-013200 status is now: NodeReady
	  Normal  Starting                 8m56s                  kube-proxy  Starting kube-proxy.
	  Normal  Starting                 6m8s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m7s (x8 over 6m7s)    kubelet     Node old-k8s-version-013200 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m7s (x8 over 6m7s)    kubelet     Node old-k8s-version-013200 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m7s (x7 over 6m7s)    kubelet     Node old-k8s-version-013200 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m7s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m46s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[  +2.630261] tmpfs: Unknown parameter 'noswap'
	[Oct28 12:20] tmpfs: Unknown parameter 'noswap'
	[ +20.961356] tmpfs: Unknown parameter 'noswap'
	[ +26.899360] tmpfs: Unknown parameter 'noswap'
	[Oct28 12:21] tmpfs: Unknown parameter 'noswap'
	[ +34.981231] tmpfs: Unknown parameter 'noswap'
	[Oct28 12:22] tmpfs: Unknown parameter 'noswap'
	[Oct28 12:23] tmpfs: Unknown parameter 'noswap'
	[ +12.309282] tmpfs: Unknown parameter 'noswap'
	[  +7.568301] tmpfs: Unknown parameter 'noswap'
	[ +22.891389] tmpfs: Unknown parameter 'noswap'
	[Oct28 12:25] tmpfs: Unknown parameter 'noswap'
	[ +12.308307] tmpfs: Unknown parameter 'noswap'
	[Oct28 12:26] tmpfs: Unknown parameter 'noswap'
	[ +12.955858] tmpfs: Unknown parameter 'noswap'
	[  +8.432291] tmpfs: Unknown parameter 'noswap'
	[Oct28 12:27] tmpfs: Unknown parameter 'noswap'
	[ +13.282616] tmpfs: Unknown parameter 'noswap'
	[Oct28 12:28] tmpfs: Unknown parameter 'noswap'
	[ +16.440726] tmpfs: Unknown parameter 'noswap'
	[Oct28 12:32] tmpfs: Unknown parameter 'noswap'
	[Oct28 12:33] tmpfs: Unknown parameter 'noswap'
	[  +0.544265] tmpfs: Unknown parameter 'noswap'
	[ +11.506459] tmpfs: Unknown parameter 'noswap'
	[  +2.206961] tmpfs: Unknown parameter 'noswap'
	
	
	==> etcd [642d04757828] <==
	2024-10-28 12:31:47.579888 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-28 12:31:57.580021 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-28 12:32:07.576804 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-28 12:32:17.576303 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-28 12:32:27.576512 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-28 12:32:37.573863 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-28 12:32:41.871551 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/metrics-server-9975d5f86-kgknk\" " with result "range_response_count:1 size:4053" took too long (334.687222ms) to execute
	2024-10-28 12:32:41.871823 W | etcdserver: read-only range request "key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" count_only:true " with result "range_response_count:0 size:5" took too long (250.698019ms) to execute
	2024-10-28 12:32:43.884086 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/metrics-server-9975d5f86-kgknk\" " with result "range_response_count:1 size:4053" took too long (348.501102ms) to execute
	2024-10-28 12:32:45.431976 W | etcdserver: request "header:<ID:1700270294681264442 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.121.2\" mod_revision:1025 > success:<request_put:<key:\"/registry/masterleases/192.168.121.2\" value_size:68 lease:1700270294681264440 >> failure:<request_range:<key:\"/registry/masterleases/192.168.121.2\" > >>" with result "size:16" took too long (598.112094ms) to execute
	2024-10-28 12:32:45.443524 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (239.931503ms) to execute
	2024-10-28 12:32:45.443567 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/metrics-server-9975d5f86-kgknk\" " with result "range_response_count:1 size:4053" took too long (407.397361ms) to execute
	2024-10-28 12:32:45.698245 W | etcdserver: read-only range request "key:\"/registry/rolebindings/\" range_end:\"/registry/rolebindings0\" count_only:true " with result "range_response_count:0 size:7" took too long (110.351772ms) to execute
	2024-10-28 12:32:45.698345 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/metrics-server-9975d5f86-kgknk\" " with result "range_response_count:1 size:4053" took too long (158.175329ms) to execute
	2024-10-28 12:32:47.641936 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-28 12:32:47.734670 W | etcdserver: read-only range request "key:\"/registry/networkpolicies/\" range_end:\"/registry/networkpolicies0\" count_only:true " with result "range_response_count:0 size:5" took too long (154.451756ms) to execute
	2024-10-28 12:32:47.735000 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/metrics-server-9975d5f86-kgknk\" " with result "range_response_count:1 size:4053" took too long (199.6021ms) to execute
	2024-10-28 12:32:57.573595 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-28 12:33:07.570252 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-28 12:33:17.570623 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-28 12:33:24.822930 W | etcdserver: request "header:<ID:1700270294681264748 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.121.2\" mod_revision:1055 > success:<request_put:<key:\"/registry/masterleases/192.168.121.2\" value_size:68 lease:1700270294681264746 >> failure:<request_range:<key:\"/registry/masterleases/192.168.121.2\" > >>" with result "size:16" took too long (129.09225ms) to execute
	2024-10-28 12:33:27.570517 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-28 12:33:30.713271 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (515.581777ms) to execute
	2024-10-28 12:33:31.744989 W | etcdserver: read-only range request "key:\"/registry/cronjobs/\" range_end:\"/registry/cronjobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (153.168908ms) to execute
	2024-10-28 12:33:37.566411 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [cc895678b294] <==
	2024-10-28 12:25:29.469170 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-28 12:25:30.228900 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/etcd-old-k8s-version-013200\" " with result "range_response_count:1 size:5265" took too long (490.451928ms) to execute
	2024-10-28 12:25:31.822367 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (158.351423ms) to execute
	2024-10-28 12:25:39.467797 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-28 12:25:49.467913 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-28 12:25:52.523300 W | etcdserver: read-only range request "key:\"/registry/flowschemas/\" range_end:\"/registry/flowschemas0\" count_only:true " with result "range_response_count:0 size:7" took too long (450.306573ms) to execute
	2024-10-28 12:25:52.523632 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-scheduler-old-k8s-version-013200\" " with result "range_response_count:1 size:4267" took too long (217.216549ms) to execute
	2024-10-28 12:25:52.757667 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:421" took too long (144.145312ms) to execute
	2024-10-28 12:25:56.292243 W | etcdserver: request "header:<ID:1700270294626697897 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/old-k8s-version-013200\" mod_revision:522 > success:<request_put:<key:\"/registry/leases/kube-node-lease/old-k8s-version-013200\" value_size:577 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/old-k8s-version-013200\" > >>" with result "size:16" took too long (103.082633ms) to execute
	2024-10-28 12:25:56.917159 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (191.562166ms) to execute
	2024-10-28 12:25:56.917684 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (244.955637ms) to execute
	2024-10-28 12:25:59.467012 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-28 12:26:09.465928 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-28 12:26:09.758028 W | etcdserver: read-only range request "key:\"/registry/networkpolicies/\" range_end:\"/registry/networkpolicies0\" count_only:true " with result "range_response_count:0 size:5" took too long (109.863419ms) to execute
	2024-10-28 12:26:10.518581 W | etcdserver: read-only range request "key:\"/registry/statefulsets/\" range_end:\"/registry/statefulsets0\" count_only:true " with result "range_response_count:0 size:5" took too long (161.954126ms) to execute
	2024-10-28 12:26:19.465213 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-28 12:26:23.330136 W | wal: sync duration of 1.012178845s, expected less than 1s
	2024-10-28 12:26:23.330687 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (668.163742ms) to execute
	2024-10-28 12:26:23.331227 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:257" took too long (859.627042ms) to execute
	2024-10-28 12:26:29.461730 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-28 12:26:39.461351 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-28 12:26:44.015295 N | pkg/osutil: received terminated signal, shutting down...
	WARNING: 2024/10/28 12:26:44 grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	2024-10-28 12:26:44.024230 I | etcdserver: skipped leadership transfer for single voting member cluster
	WARNING: 2024/10/28 12:26:44 grpc: addrConn.createTransport failed to connect to {192.168.121.2:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 192.168.121.2:2379: connect: connection refused". Reconnecting...
	
	
	==> kernel <==
	 12:33:41 up  1:37,  0 users,  load average: 5.05, 5.70, 6.17
	Linux old-k8s-version-013200 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [17aaa55a6fdb] <==
	E1028 12:30:55.288702       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1028 12:30:55.288713       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1028 12:31:11.369566       1 client.go:360] parsed scheme: "passthrough"
	I1028 12:31:11.369702       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1028 12:31:11.369713       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1028 12:31:52.115226       1 client.go:360] parsed scheme: "passthrough"
	I1028 12:31:52.115365       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1028 12:31:52.115383       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1028 12:32:30.891786       1 client.go:360] parsed scheme: "passthrough"
	I1028 12:32:30.891903       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1028 12:32:30.891915       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1028 12:32:45.433395       1 trace.go:205] Trace[257426063]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (28-Oct-2024 12:32:44.616) (total time: 817ms):
	Trace[257426063]: ---"Transaction committed" 813ms (12:32:00.433)
	Trace[257426063]: [817.118978ms] [817.118978ms] END
	W1028 12:32:49.802931       1 handler_proxy.go:102] no RequestInfo found in the context
	E1028 12:32:49.803175       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1028 12:32:49.803187       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1028 12:33:03.613275       1 client.go:360] parsed scheme: "passthrough"
	I1028 12:33:03.613389       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1028 12:33:03.613403       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1028 12:33:37.664302       1 client.go:360] parsed scheme: "passthrough"
	I1028 12:33:37.664531       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1028 12:33:37.664549       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-apiserver [49e5e06a6361] <==
	W1028 12:26:44.021800       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	I1028 12:26:44.021821       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I1028 12:26:44.022706       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I1028 12:26:44.021822       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I1028 12:26:44.022014       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I1028 12:26:44.022120       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I1028 12:26:44.022137       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	W1028 12:26:44.022145       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	I1028 12:26:44.022293       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I1028 12:26:44.022525       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	W1028 12:26:44.022774       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1028 12:26:44.021732       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1028 12:26:44.022974       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1028 12:26:44.022999       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1028 12:26:44.022999       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1028 12:26:44.023049       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1028 12:26:44.023096       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	I1028 12:26:44.022314       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	W1028 12:26:44.023139       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1028 12:26:44.021750       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1028 12:26:44.023238       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1028 12:26:44.023302       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1028 12:26:44.023672       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1028 12:26:44.023708       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1028 12:26:44.025369       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	
	==> kube-controller-manager [1a4a898cd699] <==
	I1028 12:24:39.627458       1 shared_informer.go:247] Caches are synced for attach detach 
	I1028 12:24:39.627711       1 shared_informer.go:247] Caches are synced for taint 
	I1028 12:24:39.627783       1 shared_informer.go:247] Caches are synced for daemon sets 
	I1028 12:24:39.627807       1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: 
	W1028 12:24:39.627919       1 node_lifecycle_controller.go:1044] Missing timestamp for Node old-k8s-version-013200. Assuming now as a timestamp.
	I1028 12:24:39.627988       1 node_lifecycle_controller.go:1245] Controller detected that zone  is now in state Normal.
	I1028 12:24:39.628165       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I1028 12:24:39.628280       1 event.go:291] "Event occurred" object="old-k8s-version-013200" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-013200 event: Registered Node old-k8s-version-013200 in Controller"
	I1028 12:24:39.627487       1 shared_informer.go:247] Caches are synced for persistent volume 
	I1028 12:24:39.635410       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-tq76q"
	I1028 12:24:39.739072       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I1028 12:24:39.755928       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-h4dhd"
	I1028 12:24:39.756103       1 range_allocator.go:373] Set node old-k8s-version-013200 PodCIDR to [10.244.0.0/24]
	I1028 12:24:39.931030       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-wm5p7"
	I1028 12:24:40.128084       1 shared_informer.go:247] Caches are synced for garbage collector 
	I1028 12:24:40.128212       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1028 12:24:40.139446       1 shared_informer.go:247] Caches are synced for garbage collector 
	E1028 12:24:40.237127       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"33f8f492-2357-41d8-8c10-a7d631962bf9", ResourceVersion:"275", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63865715064, loc:(*time.Location)(0x6f2f340)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0014f06a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0014f06c0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.
LabelSelector)(0xc0014f06e0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Gl
usterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc0018d4840), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0014f0
700), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeS
ource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0014f0720), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil),
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0014f0760)}}, Resources:v1.R
esourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001e0f0e0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001782da8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000ad64d0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000313f00)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001782df8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	E1028 12:24:40.431264       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"33f8f492-2357-41d8-8c10-a7d631962bf9", ResourceVersion:"409", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63865715064, loc:(*time.Location)(0x6f2f340)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0014f0d40), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0014f0da0)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0014f0e00), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0014f0ec0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0014f0f20), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc001935d80), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0014f11c0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0014f1220), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0014f12e0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc002105440), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001a0d858), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00040cc40), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc001442b08)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001a0d8a8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I1028 12:24:43.780096       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I1028 12:24:43.992687       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-tq76q"
	I1028 12:26:41.539958       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	I1028 12:26:41.613285       1 event.go:291] "Event occurred" object="kube-system/metrics-server-9975d5f86" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-9975d5f86-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E1028 12:26:41.633597       1 replica_set.go:532] sync "kube-system/metrics-server-9975d5f86" failed with pods "metrics-server-9975d5f86-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I1028 12:26:42.653220       1 event.go:291] "Event occurred" object="kube-system/metrics-server-9975d5f86" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-9975d5f86-kgknk"
	
	
	==> kube-controller-manager [84a52451395b] <==
	W1028 12:29:16.015237       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1028 12:29:42.117542       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1028 12:29:47.664223       1 request.go:655] Throttling request took 1.048482287s, request: GET:https://192.168.121.2:8443/apis/batch/v1?timeout=32s
	W1028 12:29:48.516215       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1028 12:30:12.617034       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1028 12:30:20.163367       1 request.go:655] Throttling request took 1.047449904s, request: GET:https://192.168.121.2:8443/apis/flowcontrol.apiserver.k8s.io/v1beta1?timeout=32s
	W1028 12:30:21.018956       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1028 12:30:43.118538       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1028 12:30:52.666704       1 request.go:655] Throttling request took 1.04708329s, request: GET:https://192.168.121.2:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
	W1028 12:30:53.518751       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1028 12:31:13.618652       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1028 12:31:25.166372       1 request.go:655] Throttling request took 1.04698421s, request: GET:https://192.168.121.2:8443/apis/certificates.k8s.io/v1?timeout=32s
	W1028 12:31:26.018502       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1028 12:31:44.118247       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1028 12:31:57.666354       1 request.go:655] Throttling request took 1.047757771s, request: GET:https://192.168.121.2:8443/apis/flowcontrol.apiserver.k8s.io/v1beta1?timeout=32s
	W1028 12:31:58.518020       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1028 12:32:14.618090       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1028 12:32:30.162277       1 request.go:655] Throttling request took 1.046036656s, request: GET:https://192.168.121.2:8443/apis/batch/v1?timeout=32s
	W1028 12:32:31.077225       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1028 12:32:45.118021       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1028 12:33:02.724455       1 request.go:655] Throttling request took 1.047527463s, request: GET:https://192.168.121.2:8443/apis/admissionregistration.k8s.io/v1beta1?timeout=32s
	W1028 12:33:03.579301       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1028 12:33:15.618130       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1028 12:33:35.227425       1 request.go:655] Throttling request took 1.046276728s, request: GET:https://192.168.121.2:8443/apis/events.k8s.io/v1beta1?timeout=32s
	W1028 12:33:36.081671       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	
	==> kube-proxy [0a1c612f812e] <==
	I1028 12:27:55.743794       1 node.go:172] Successfully retrieved node IP: 192.168.121.2
	I1028 12:27:55.743961       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.121.2), assume IPv4 operation
	W1028 12:27:55.938545       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I1028 12:27:55.938906       1 server_others.go:185] Using iptables Proxier.
	I1028 12:27:55.939860       1 server.go:650] Version: v1.20.0
	I1028 12:27:55.941523       1 config.go:315] Starting service config controller
	I1028 12:27:55.941641       1 shared_informer.go:240] Waiting for caches to sync for service config
	I1028 12:27:55.941740       1 config.go:224] Starting endpoint slice config controller
	I1028 12:27:55.941749       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I1028 12:27:56.042522       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I1028 12:27:56.042792       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-proxy [ee68d9004e36] <==
	I1028 12:24:45.583139       1 node.go:172] Successfully retrieved node IP: 192.168.121.2
	I1028 12:24:45.583511       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.121.2), assume IPv4 operation
	W1028 12:24:45.705054       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I1028 12:24:45.705292       1 server_others.go:185] Using iptables Proxier.
	I1028 12:24:45.706057       1 server.go:650] Version: v1.20.0
	I1028 12:24:45.707911       1 config.go:315] Starting service config controller
	I1028 12:24:45.708007       1 shared_informer.go:240] Waiting for caches to sync for service config
	I1028 12:24:45.707953       1 config.go:224] Starting endpoint slice config controller
	I1028 12:24:45.708033       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I1028 12:24:45.808643       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I1028 12:24:45.809029       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-scheduler [8379b070c9db] <==
	I1028 12:27:41.059485       1 serving.go:331] Generated self-signed cert in-memory
	W1028 12:27:48.913695       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1028 12:27:48.913808       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found, role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found]
	W1028 12:27:48.913830       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1028 12:27:48.913913       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1028 12:27:49.210539       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1028 12:27:49.210879       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1028 12:27:49.212061       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I1028 12:27:49.212085       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I1028 12:27:49.411389       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kube-scheduler [9ce4d10d6386] <==
	E1028 12:24:18.430884       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1028 12:24:18.431337       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1028 12:24:18.431458       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1028 12:24:18.431706       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1028 12:24:18.432112       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1028 12:24:18.432249       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1028 12:24:18.432703       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1028 12:24:18.432753       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1028 12:24:18.433030       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1028 12:24:18.433512       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1028 12:24:18.433824       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1028 12:24:18.434259       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1028 12:24:19.254361       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1028 12:24:19.309235       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1028 12:24:19.334749       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1028 12:24:19.458004       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1028 12:24:19.475607       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1028 12:24:19.523912       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1028 12:24:19.585973       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1028 12:24:19.594767       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1028 12:24:19.652448       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1028 12:24:19.657987       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1028 12:24:19.751024       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1028 12:24:20.036584       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I1028 12:24:21.943887       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Oct 28 12:31:18 old-k8s-version-013200 kubelet[1888]: E1028 12:31:18.003362    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Oct 28 12:31:24 old-k8s-version-013200 kubelet[1888]: E1028 12:31:24.003108    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 28 12:31:30 old-k8s-version-013200 kubelet[1888]: E1028 12:31:30.016995    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Oct 28 12:31:38 old-k8s-version-013200 kubelet[1888]: E1028 12:31:38.998959    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 28 12:31:43 old-k8s-version-013200 kubelet[1888]: E1028 12:31:43.999179    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Oct 28 12:31:52 old-k8s-version-013200 kubelet[1888]: E1028 12:31:52.999572    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 28 12:31:58 old-k8s-version-013200 kubelet[1888]: E1028 12:31:58.999520    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Oct 28 12:32:08 old-k8s-version-013200 kubelet[1888]: E1028 12:32:08.001604    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 28 12:32:10 old-k8s-version-013200 kubelet[1888]: E1028 12:32:10.996463    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Oct 28 12:32:22 old-k8s-version-013200 kubelet[1888]: E1028 12:32:22.996367    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 28 12:32:23 old-k8s-version-013200 kubelet[1888]: E1028 12:32:23.996397    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Oct 28 12:32:34 old-k8s-version-013200 kubelet[1888]: W1028 12:32:34.078244    1888 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	Oct 28 12:32:34 old-k8s-version-013200 kubelet[1888]: W1028 12:32:34.080475    1888 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu0/online: open /sys/devices/system/cpu/cpu0/online: no such file or directory
	Oct 28 12:32:37 old-k8s-version-013200 kubelet[1888]: E1028 12:32:37.007064    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Oct 28 12:32:37 old-k8s-version-013200 kubelet[1888]: E1028 12:32:37.992511    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 28 12:32:48 old-k8s-version-013200 kubelet[1888]: E1028 12:32:48.993292    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 28 12:32:51 old-k8s-version-013200 kubelet[1888]: E1028 12:32:51.994731    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Oct 28 12:33:01 old-k8s-version-013200 kubelet[1888]: E1028 12:33:00.994748    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 28 12:33:05 old-k8s-version-013200 kubelet[1888]: E1028 12:33:05.989461    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Oct 28 12:33:11 old-k8s-version-013200 kubelet[1888]: E1028 12:33:11.989269    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 28 12:33:16 old-k8s-version-013200 kubelet[1888]: E1028 12:33:16.990577    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Oct 28 12:33:23 old-k8s-version-013200 kubelet[1888]: E1028 12:33:23.991584    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 28 12:33:28 old-k8s-version-013200 kubelet[1888]: E1028 12:33:28.990187    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Oct 28 12:33:34 old-k8s-version-013200 kubelet[1888]: E1028 12:33:34.986162    1888 pod_workers.go:191] Error syncing pod 2a9dad11-b2c1-4cc9-8233-7918a9467ef2 ("metrics-server-9975d5f86-kgknk_kube-system(2a9dad11-b2c1-4cc9-8233-7918a9467ef2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 28 12:33:41 old-k8s-version-013200 kubelet[1888]: E1028 12:33:41.019175    1888 pod_workers.go:191] Error syncing pod 6a8f958f-059f-46cc-bd8e-fce6c797bcf6 ("dashboard-metrics-scraper-8d5bb5db8-q2nvm_kubernetes-dashboard(6a8f958f-059f-46cc-bd8e-fce6c797bcf6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	
	
	==> kubernetes-dashboard [7f41acfe30e7] <==
	2024/10/28 12:28:35 Starting overwatch
	2024/10/28 12:28:35 Using namespace: kubernetes-dashboard
	2024/10/28 12:28:35 Using in-cluster config to connect to apiserver
	2024/10/28 12:28:35 Using secret token for csrf signing
	2024/10/28 12:28:35 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/10/28 12:28:35 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/10/28 12:28:35 Successful initial request to the apiserver, version: v1.20.0
	2024/10/28 12:28:35 Generating JWE encryption key
	2024/10/28 12:28:35 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/10/28 12:28:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/10/28 12:28:36 Initializing JWE encryption key from synchronized object
	2024/10/28 12:28:36 Creating in-cluster Sidecar client
	2024/10/28 12:28:36 Serving insecurely on HTTP port: 9090
	2024/10/28 12:28:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/28 12:29:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/28 12:29:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/28 12:30:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/28 12:30:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/28 12:31:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/28 12:31:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/28 12:32:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/28 12:32:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/28 12:33:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/28 12:33:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [7fe2f6b267f7] <==
	I1028 12:28:31.640178       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1028 12:28:31.664592       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1028 12:28:31.664659       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1028 12:28:49.178833       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1028 12:28:49.179385       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-013200_606f44e7-9b65-48a0-8d1f-8b7a4cae4c33!
	I1028 12:28:49.179505       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8e15c77a-9fa8-4ab9-b1d3-d7123f53f0ea", APIVersion:"v1", ResourceVersion:"824", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-013200_606f44e7-9b65-48a0-8d1f-8b7a4cae4c33 became leader
	I1028 12:28:49.280728       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-013200_606f44e7-9b65-48a0-8d1f-8b7a4cae4c33!
	
	
	==> storage-provisioner [befcb830733f] <==
	I1028 12:27:55.222065       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1028 12:28:16.298090       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-013200 -n old-k8s-version-013200
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-013200 -n old-k8s-version-013200: (1.0666469s)
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-013200 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-kgknk dashboard-metrics-scraper-8d5bb5db8-q2nvm
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-013200 describe pod metrics-server-9975d5f86-kgknk dashboard-metrics-scraper-8d5bb5db8-q2nvm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-013200 describe pod metrics-server-9975d5f86-kgknk dashboard-metrics-scraper-8d5bb5db8-q2nvm: exit status 1 (565.5825ms)

                                                
                                                
** stderr ** 
	E1028 12:33:45.329507   11664 memcache.go:287] "Unhandled Error" err="couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request"
	E1028 12:33:45.419485   11664 memcache.go:121] "Unhandled Error" err="couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request"
	E1028 12:33:45.460252   11664 memcache.go:121] "Unhandled Error" err="couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request"
	E1028 12:33:45.491668   11664 memcache.go:121] "Unhandled Error" err="couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request"
	Error from server (NotFound): pods "metrics-server-9975d5f86-kgknk" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-8d5bb5db8-q2nvm" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-013200 describe pod metrics-server-9975d5f86-kgknk dashboard-metrics-scraper-8d5bb5db8-q2nvm: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (409.75s)

                                                
                                    

Test pass (313/342)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 9.1
4 TestDownloadOnly/v1.20.0/preload-exists 0.07
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.29
9 TestDownloadOnly/v1.20.0/DeleteAll 1.34
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.98
12 TestDownloadOnly/v1.31.2/json-events 5.64
13 TestDownloadOnly/v1.31.2/preload-exists 0
16 TestDownloadOnly/v1.31.2/kubectl 0
17 TestDownloadOnly/v1.31.2/LogsDuration 0.28
18 TestDownloadOnly/v1.31.2/DeleteAll 1.38
19 TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds 1.01
20 TestDownloadOnlyKic 3.34
21 TestBinaryMirror 2.89
22 TestOffline 113.35
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.27
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.26
27 TestAddons/Setup 517.58
29 TestAddons/serial/Volcano 57.19
31 TestAddons/serial/GCPAuth/Namespaces 0.34
32 TestAddons/serial/GCPAuth/FakeCredentials 10.68
37 TestAddons/parallel/InspektorGadget 52.41
38 TestAddons/parallel/MetricsServer 9.36
40 TestAddons/parallel/CSI 56.72
41 TestAddons/parallel/Headlamp 31.48
42 TestAddons/parallel/CloudSpanner 7.13
43 TestAddons/parallel/LocalPath 61.86
44 TestAddons/parallel/NvidiaDevicePlugin 6.96
45 TestAddons/parallel/Yakd 12.52
46 TestAddons/parallel/AmdGpuDevicePlugin 8.38
47 TestAddons/StoppedEnableDisable 13.36
48 TestCertOptions 95.24
49 TestCertExpiration 312.84
50 TestDockerFlags 85.22
51 TestForceSystemdFlag 98.71
52 TestForceSystemdEnv 101.98
59 TestErrorSpam/start 3.62
60 TestErrorSpam/status 3.07
61 TestErrorSpam/pause 3.27
62 TestErrorSpam/unpause 3.26
63 TestErrorSpam/stop 20.4
66 TestFunctional/serial/CopySyncFile 0.03
67 TestFunctional/serial/StartWithProxy 92.77
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 44.15
70 TestFunctional/serial/KubeContext 0.13
71 TestFunctional/serial/KubectlGetPods 0.22
74 TestFunctional/serial/CacheCmd/cache/add_remote 6.41
75 TestFunctional/serial/CacheCmd/cache/add_local 3.48
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.3
77 TestFunctional/serial/CacheCmd/cache/list 0.27
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.81
79 TestFunctional/serial/CacheCmd/cache/cache_reload 3.96
80 TestFunctional/serial/CacheCmd/cache/delete 0.6
81 TestFunctional/serial/MinikubeKubectlCmd 0.5
83 TestFunctional/serial/ExtraConfig 47.56
84 TestFunctional/serial/ComponentHealth 0.18
85 TestFunctional/serial/LogsCmd 2.66
86 TestFunctional/serial/LogsFileCmd 2.97
87 TestFunctional/serial/InvalidService 5.32
89 TestFunctional/parallel/ConfigCmd 1.87
91 TestFunctional/parallel/DryRun 2.22
92 TestFunctional/parallel/InternationalLanguage 1.13
93 TestFunctional/parallel/StatusCmd 4.07
98 TestFunctional/parallel/AddonsCmd 0.86
99 TestFunctional/parallel/PersistentVolumeClaim 42.76
101 TestFunctional/parallel/SSHCmd 1.48
102 TestFunctional/parallel/CpCmd 5.86
103 TestFunctional/parallel/MySQL 77.29
104 TestFunctional/parallel/FileSync 0.68
105 TestFunctional/parallel/CertSync 4.97
109 TestFunctional/parallel/NodeLabels 0.24
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.72
113 TestFunctional/parallel/License 2.82
114 TestFunctional/parallel/ServiceCmd/DeployApp 20.45
115 TestFunctional/parallel/ProfileCmd/profile_not_create 1.47
116 TestFunctional/parallel/ProfileCmd/profile_list 1.6
117 TestFunctional/parallel/ProfileCmd/profile_json_output 1.8
118 TestFunctional/parallel/DockerEnv/powershell 7.13
119 TestFunctional/parallel/UpdateContextCmd/no_changes 0.49
120 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.48
121 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.48
123 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.97
124 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
126 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 30.52
127 TestFunctional/parallel/ServiceCmd/List 1.11
128 TestFunctional/parallel/ServiceCmd/JSONOutput 1.1
129 TestFunctional/parallel/ServiceCmd/HTTPS 15.01
130 TestFunctional/parallel/ServiceCmd/Format 15.01
131 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.18
136 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.21
137 TestFunctional/parallel/ServiceCmd/URL 15.01
138 TestFunctional/parallel/Version/short 0.26
139 TestFunctional/parallel/Version/components 1.53
140 TestFunctional/parallel/ImageCommands/ImageListShort 0.57
141 TestFunctional/parallel/ImageCommands/ImageListTable 0.83
142 TestFunctional/parallel/ImageCommands/ImageListJson 0.67
143 TestFunctional/parallel/ImageCommands/ImageListYaml 0.69
144 TestFunctional/parallel/ImageCommands/ImageBuild 6.84
145 TestFunctional/parallel/ImageCommands/Setup 1.92
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.88
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.83
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 3.6
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.8
150 TestFunctional/parallel/ImageCommands/ImageRemove 1.7
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.18
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.48
153 TestFunctional/delete_echo-server_images 0.21
154 TestFunctional/delete_my-image_image 0.1
155 TestFunctional/delete_minikube_cached_images 0.09
159 TestMultiControlPlane/serial/StartCluster 207.7
160 TestMultiControlPlane/serial/DeployApp 25.63
161 TestMultiControlPlane/serial/PingHostFromPods 3.63
162 TestMultiControlPlane/serial/AddWorkerNode 56.3
163 TestMultiControlPlane/serial/NodeLabels 0.2
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 3
165 TestMultiControlPlane/serial/CopyFile 45.65
166 TestMultiControlPlane/serial/StopSecondaryNode 14.05
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 2.17
168 TestMultiControlPlane/serial/RestartSecondaryNode 150.19
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 2.69
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 253.32
171 TestMultiControlPlane/serial/DeleteSecondaryNode 16.66
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 2.04
173 TestMultiControlPlane/serial/StopCluster 36.38
174 TestMultiControlPlane/serial/RestartCluster 157.3
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 2.04
176 TestMultiControlPlane/serial/AddSecondaryNode 76.88
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 2.81
180 TestImageBuild/serial/Setup 60.38
181 TestImageBuild/serial/NormalBuild 5.27
182 TestImageBuild/serial/BuildWithBuildArg 2.32
183 TestImageBuild/serial/BuildWithDockerIgnore 1.54
184 TestImageBuild/serial/BuildWithSpecifiedDockerfile 1.66
188 TestJSONOutput/start/Command 97.78
189 TestJSONOutput/start/Audit 0.05
191 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/pause/Command 1.37
195 TestJSONOutput/pause/Audit 0.06
197 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/unpause/Command 1.22
201 TestJSONOutput/unpause/Audit 0.06
203 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/stop/Command 7.17
207 TestJSONOutput/stop/Audit 0.05
209 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
210 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
211 TestErrorJSONOutput 0.95
213 TestKicCustomNetwork/create_custom_network 68.46
214 TestKicCustomNetwork/use_default_bridge_network 67.71
215 TestKicExistingNetwork 69.4
216 TestKicCustomSubnet 69.06
217 TestKicStaticIP 70.09
218 TestMainNoArgs 0.25
219 TestMinikubeProfile 136.91
222 TestMountStart/serial/StartWithMountFirst 17.55
223 TestMountStart/serial/VerifyMountFirst 0.74
224 TestMountStart/serial/StartWithMountSecond 16.63
225 TestMountStart/serial/VerifyMountSecond 0.71
226 TestMountStart/serial/DeleteFirst 2.77
227 TestMountStart/serial/VerifyMountPostDelete 0.72
228 TestMountStart/serial/Stop 1.99
229 TestMountStart/serial/RestartStopped 11.83
230 TestMountStart/serial/VerifyMountPostStop 0.71
233 TestMultiNode/serial/FreshStart2Nodes 147.14
234 TestMultiNode/serial/DeployApp2Nodes 40.75
235 TestMultiNode/serial/PingHostFrom2Pods 2.49
236 TestMultiNode/serial/AddNode 48.69
237 TestMultiNode/serial/MultiNodeLabels 0.24
238 TestMultiNode/serial/ProfileList 2.04
239 TestMultiNode/serial/CopyFile 26.8
240 TestMultiNode/serial/StopNode 4.71
241 TestMultiNode/serial/StartAfterStop 18.39
242 TestMultiNode/serial/RestartKeepsNodes 114.48
243 TestMultiNode/serial/DeleteNode 9.91
244 TestMultiNode/serial/StopMultiNode 24.27
245 TestMultiNode/serial/RestartMultiNode 67.74
246 TestMultiNode/serial/ValidateNameConflict 66.32
250 TestPreload 158
251 TestScheduledStopWindows 130.32
255 TestInsufficientStorage 42.07
256 TestRunningBinaryUpgrade 194.64
258 TestKubernetesUpgrade 239.89
259 TestMissingContainerUpgrade 333.7
263 TestNoKubernetes/serial/StartNoK8sWithVersion 0.32
273 TestNoKubernetes/serial/StartWithK8s 93.64
274 TestNoKubernetes/serial/StartWithStopK8s 30.33
275 TestStoppedBinaryUpgrade/Setup 0.85
276 TestStoppedBinaryUpgrade/Upgrade 316.14
277 TestNoKubernetes/serial/Start 28.18
278 TestNoKubernetes/serial/VerifyK8sNotRunning 0.97
279 TestNoKubernetes/serial/ProfileList 5.39
280 TestNoKubernetes/serial/Stop 2.37
281 TestNoKubernetes/serial/StartNoArgs 14.41
282 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.89
290 TestStoppedBinaryUpgrade/MinikubeLogs 4.25
292 TestPause/serial/Start 108.02
293 TestNetworkPlugins/group/auto/Start 98.59
294 TestNetworkPlugins/group/kindnet/Start 125.77
295 TestPause/serial/SecondStartNoReconfiguration 45.57
296 TestPause/serial/Pause 1.44
297 TestPause/serial/VerifyStatus 0.9
298 TestPause/serial/Unpause 1.43
299 TestPause/serial/PauseAgain 1.64
300 TestPause/serial/DeletePaused 5.52
301 TestPause/serial/VerifyDeletedResources 4.5
302 TestNetworkPlugins/group/calico/Start 176.45
303 TestNetworkPlugins/group/custom-flannel/Start 111.66
304 TestNetworkPlugins/group/auto/KubeletFlags 0.84
305 TestNetworkPlugins/group/auto/NetCatPod 30.13
306 TestNetworkPlugins/group/auto/DNS 0.4
307 TestNetworkPlugins/group/auto/Localhost 0.32
308 TestNetworkPlugins/group/auto/HairPin 0.33
309 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
310 TestNetworkPlugins/group/kindnet/KubeletFlags 0.91
311 TestNetworkPlugins/group/kindnet/NetCatPod 19.63
312 TestNetworkPlugins/group/kindnet/DNS 0.51
313 TestNetworkPlugins/group/kindnet/Localhost 0.55
314 TestNetworkPlugins/group/kindnet/HairPin 0.45
315 TestNetworkPlugins/group/false/Start 120.95
316 TestNetworkPlugins/group/custom-flannel/KubeletFlags 1.25
317 TestNetworkPlugins/group/custom-flannel/NetCatPod 18.11
318 TestNetworkPlugins/group/custom-flannel/DNS 0.52
319 TestNetworkPlugins/group/custom-flannel/Localhost 0.35
320 TestNetworkPlugins/group/custom-flannel/HairPin 0.41
321 TestNetworkPlugins/group/enable-default-cni/Start 80.81
322 TestNetworkPlugins/group/calico/ControllerPod 6.01
323 TestNetworkPlugins/group/calico/KubeletFlags 0.8
324 TestNetworkPlugins/group/calico/NetCatPod 22.64
325 TestNetworkPlugins/group/flannel/Start 110
326 TestNetworkPlugins/group/calico/DNS 0.49
327 TestNetworkPlugins/group/calico/Localhost 0.34
328 TestNetworkPlugins/group/calico/HairPin 0.35
329 TestNetworkPlugins/group/false/KubeletFlags 0.93
330 TestNetworkPlugins/group/false/NetCatPod 21.47
331 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 1.06
332 TestNetworkPlugins/group/enable-default-cni/NetCatPod 18.67
333 TestNetworkPlugins/group/false/DNS 0.5
334 TestNetworkPlugins/group/false/Localhost 0.33
335 TestNetworkPlugins/group/false/HairPin 0.35
336 TestNetworkPlugins/group/enable-default-cni/DNS 0.38
337 TestNetworkPlugins/group/enable-default-cni/Localhost 0.31
338 TestNetworkPlugins/group/enable-default-cni/HairPin 0.39
339 TestNetworkPlugins/group/bridge/Start 116.61
340 TestNetworkPlugins/group/kubenet/Start 129.59
342 TestStartStop/group/old-k8s-version/serial/FirstStart 226.91
343 TestNetworkPlugins/group/flannel/ControllerPod 6.01
344 TestNetworkPlugins/group/flannel/KubeletFlags 0.77
345 TestNetworkPlugins/group/flannel/NetCatPod 29.63
346 TestNetworkPlugins/group/flannel/DNS 0.38
347 TestNetworkPlugins/group/flannel/Localhost 0.34
348 TestNetworkPlugins/group/flannel/HairPin 0.34
349 TestNetworkPlugins/group/bridge/KubeletFlags 2.42
350 TestNetworkPlugins/group/bridge/NetCatPod 21.78
352 TestStartStop/group/no-preload/serial/FirstStart 135.94
353 TestNetworkPlugins/group/bridge/DNS 0.39
354 TestNetworkPlugins/group/bridge/Localhost 0.34
355 TestNetworkPlugins/group/bridge/HairPin 0.32
356 TestNetworkPlugins/group/kubenet/KubeletFlags 1.03
357 TestNetworkPlugins/group/kubenet/NetCatPod 22.87
358 TestNetworkPlugins/group/kubenet/DNS 0.41
359 TestNetworkPlugins/group/kubenet/Localhost 0.41
360 TestNetworkPlugins/group/kubenet/HairPin 0.36
362 TestStartStop/group/embed-certs/serial/FirstStart 120.55
364 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 104.09
365 TestStartStop/group/old-k8s-version/serial/DeployApp 11.11
366 TestStartStop/group/no-preload/serial/DeployApp 11.02
367 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.57
368 TestStartStop/group/old-k8s-version/serial/Stop 12.41
369 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.78
370 TestStartStop/group/no-preload/serial/Stop 12.36
371 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.78
373 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.93
374 TestStartStop/group/no-preload/serial/SecondStart 295.71
375 TestStartStop/group/embed-certs/serial/DeployApp 14.85
376 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.43
377 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 12.89
378 TestStartStop/group/embed-certs/serial/Stop 13.01
379 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 3.53
380 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 1.13
381 TestStartStop/group/embed-certs/serial/SecondStart 294.55
382 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.07
383 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 1.27
384 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 292.79
385 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.02
386 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.35
387 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.62
388 TestStartStop/group/no-preload/serial/Pause 7.02
390 TestStartStop/group/newest-cni/serial/FirstStart 72.58
391 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
392 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.36
393 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.7
394 TestStartStop/group/embed-certs/serial/Pause 7.37
395 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
396 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.4
397 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.67
398 TestStartStop/group/default-k8s-diff-port/serial/Pause 7.17
399 TestStartStop/group/newest-cni/serial/DeployApp 0
400 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 3.45
401 TestStartStop/group/newest-cni/serial/Stop 7.64
402 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.83
403 TestStartStop/group/newest-cni/serial/SecondStart 32.53
404 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
405 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.48
406 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.67
407 TestStartStop/group/old-k8s-version/serial/Pause 7.44
408 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
409 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
410 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.63
411 TestStartStop/group/newest-cni/serial/Pause 8.79
TestDownloadOnly/v1.20.0/json-events (9.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-583200 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-583200 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker: (9.0969066s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (9.10s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1028 10:59:53.378097   11176 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I1028 10:59:53.443973   11176 preload.go:146] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.07s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-583200
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-583200: exit status 85 (285.8846ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-583200 | minikube4\jenkins | v1.34.0 | 28 Oct 24 10:59 UTC |          |
	|         | -p download-only-583200        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=docker                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 10:59:44
	Running on machine: minikube4
	Binary: Built with gc go1.23.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 10:59:44.380800    7152 out.go:345] Setting OutFile to fd 744 ...
	I1028 10:59:44.448815    7152 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 10:59:44.449778    7152 out.go:358] Setting ErrFile to fd 748...
	I1028 10:59:44.449778    7152 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1028 10:59:44.461809    7152 root.go:314] Error reading config file at C:\Users\jenkins.minikube4\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube4\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I1028 10:59:44.473975    7152 out.go:352] Setting JSON to true
	I1028 10:59:44.476686    7152 start.go:129] hostinfo: {"hostname":"minikube4","uptime":281,"bootTime":1730112903,"procs":215,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5073 Build 19045.5073","kernelVersion":"10.0.19045.5073 Build 19045.5073","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1028 10:59:44.476686    7152 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 10:59:44.481408    7152 out.go:97] [download-only-583200] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5073 Build 19045.5073
	I1028 10:59:44.484326    7152 notify.go:220] Checking for updates...
	W1028 10:59:44.484326    7152 preload.go:293] Failed to list preload files: open C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I1028 10:59:44.487902    7152 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1028 10:59:44.489525    7152 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1028 10:59:44.492194    7152 out.go:169] MINIKUBE_LOCATION=19875
	I1028 10:59:44.497701    7152 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1028 10:59:44.501862    7152 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1028 10:59:44.502845    7152 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 10:59:44.668570    7152 docker.go:123] docker version: linux-27.2.0:Docker Desktop 4.34.2 (167172)
	I1028 10:59:44.676570    7152 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1028 10:59:45.954967    7152 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.2783935s)
	I1028 10:59:45.955964    7152 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:true NGoroutines:66 SystemTime:2024-10-28 10:59:45.93178294 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657532416 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I1028 10:59:45.959964    7152 out.go:97] Using the docker driver based on user configuration
	I1028 10:59:45.959964    7152 start.go:297] selected driver: docker
	I1028 10:59:45.959964    7152 start.go:901] validating driver "docker" against <nil>
	I1028 10:59:45.980537    7152 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1028 10:59:46.304973    7152 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:true NGoroutines:66 SystemTime:2024-10-28 10:59:46.276326144 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657532416 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I1028 10:59:46.305565    7152 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 10:59:46.358369    7152 start_flags.go:393] Using suggested 16300MB memory alloc based on sys=65534MB, container=32098MB
	I1028 10:59:46.359077    7152 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1028 10:59:46.362784    7152 out.go:169] Using Docker Desktop driver with root privileges
	I1028 10:59:46.365705    7152 cni.go:84] Creating CNI manager for ""
	I1028 10:59:46.365705    7152 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1028 10:59:46.365705    7152 start.go:340] cluster config:
	{Name:download-only-583200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:16300 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-583200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 10:59:46.368311    7152 out.go:97] Starting "download-only-583200" primary control-plane node in "download-only-583200" cluster
	I1028 10:59:46.368311    7152 cache.go:121] Beginning downloading kic base image for docker with docker
	I1028 10:59:46.370734    7152 out.go:97] Pulling base image v0.0.45-1729876044-19868 ...
	I1028 10:59:46.370808    7152 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e in local docker daemon
	I1028 10:59:46.370808    7152 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1028 10:59:46.442311    7152 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e to local cache
	I1028 10:59:46.442311    7152 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.45-1729876044-19868@sha256_98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e.tar
	I1028 10:59:46.442311    7152 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.45-1729876044-19868@sha256_98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e.tar
	I1028 10:59:46.442311    7152 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e in local cache directory
	I1028 10:59:46.443309    7152 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I1028 10:59:46.443309    7152 cache.go:56] Caching tarball of preloaded images
	I1028 10:59:46.443309    7152 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e to local cache
	I1028 10:59:46.443309    7152 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1028 10:59:46.445324    7152 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1028 10:59:46.445324    7152 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I1028 10:59:46.517968    7152 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-583200 host does not exist
	  To start a cluster, run: "minikube start -p download-only-583200"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.29s)

TestDownloadOnly/v1.20.0/DeleteAll (1.34s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.3404044s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (1.34s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.98s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-583200
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.98s)

TestDownloadOnly/v1.31.2/json-events (5.64s)

=== RUN   TestDownloadOnly/v1.31.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-174200 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=docker --driver=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-174200 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=docker --driver=docker: (5.6436234s)
--- PASS: TestDownloadOnly/v1.31.2/json-events (5.64s)

TestDownloadOnly/v1.31.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.2/preload-exists
I1028 11:00:01.703771   11176 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
I1028 11:00:01.703771   11176 preload.go:146] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.2/preload-exists (0.00s)

TestDownloadOnly/v1.31.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.2/kubectl
--- PASS: TestDownloadOnly/v1.31.2/kubectl (0.00s)

                                                
TestDownloadOnly/v1.31.2/LogsDuration (0.28s)

=== RUN   TestDownloadOnly/v1.31.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-174200
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-174200: exit status 85 (275.8424ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-583200 | minikube4\jenkins | v1.34.0 | 28 Oct 24 10:59 UTC |                     |
	|         | -p download-only-583200        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=docker                |                      |                   |         |                     |                     |
	| delete  | --all                          | minikube             | minikube4\jenkins | v1.34.0 | 28 Oct 24 10:59 UTC | 28 Oct 24 10:59 UTC |
	| delete  | -p download-only-583200        | download-only-583200 | minikube4\jenkins | v1.34.0 | 28 Oct 24 10:59 UTC | 28 Oct 24 10:59 UTC |
	| start   | -o=json --download-only        | download-only-174200 | minikube4\jenkins | v1.34.0 | 28 Oct 24 10:59 UTC |                     |
	|         | -p download-only-174200        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=docker                |                      |                   |         |                     |                     |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 10:59:56
	Running on machine: minikube4
	Binary: Built with gc go1.23.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 10:59:56.163176   16316 out.go:345] Setting OutFile to fd 920 ...
	I1028 10:59:56.235241   16316 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 10:59:56.235241   16316 out.go:358] Setting ErrFile to fd 924...
	I1028 10:59:56.235307   16316 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 10:59:56.259306   16316 out.go:352] Setting JSON to true
	I1028 10:59:56.262420   16316 start.go:129] hostinfo: {"hostname":"minikube4","uptime":292,"bootTime":1730112903,"procs":215,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5073 Build 19045.5073","kernelVersion":"10.0.19045.5073 Build 19045.5073","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1028 10:59:56.262551   16316 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 10:59:56.266880   16316 out.go:97] [download-only-174200] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5073 Build 19045.5073
	I1028 10:59:56.266880   16316 notify.go:220] Checking for updates...
	I1028 10:59:56.269184   16316 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1028 10:59:56.271813   16316 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1028 10:59:56.273104   16316 out.go:169] MINIKUBE_LOCATION=19875
	I1028 10:59:56.277520   16316 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1028 10:59:56.287436   16316 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1028 10:59:56.288067   16316 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 10:59:56.463862   16316 docker.go:123] docker version: linux-27.2.0:Docker Desktop 4.34.2 (167172)
	I1028 10:59:56.474506   16316 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1028 10:59:56.783399   16316 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:true NGoroutines:66 SystemTime:2024-10-28 10:59:56.759222872 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657532416 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I1028 10:59:57.125014   16316 out.go:97] Using the docker driver based on user configuration
	I1028 10:59:57.125014   16316 start.go:297] selected driver: docker
	I1028 10:59:57.125358   16316 start.go:901] validating driver "docker" against <nil>
	I1028 10:59:57.144634   16316 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1028 10:59:57.451255   16316 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:true NGoroutines:66 SystemTime:2024-10-28 10:59:57.429589278 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657532416 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I1028 10:59:57.451255   16316 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 10:59:57.497614   16316 start_flags.go:393] Using suggested 16300MB memory alloc based on sys=65534MB, container=32098MB
	I1028 10:59:57.498455   16316 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1028 10:59:57.502840   16316 out.go:169] Using Docker Desktop driver with root privileges
	I1028 10:59:57.505393   16316 cni.go:84] Creating CNI manager for ""
	I1028 10:59:57.505393   16316 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1028 10:59:57.505393   16316 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1028 10:59:57.505393   16316 start.go:340] cluster config:
	{Name:download-only-174200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:16300 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:download-only-174200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 10:59:57.507656   16316 out.go:97] Starting "download-only-174200" primary control-plane node in "download-only-174200" cluster
	I1028 10:59:57.507656   16316 cache.go:121] Beginning downloading kic base image for docker with docker
	I1028 10:59:57.510468   16316 out.go:97] Pulling base image v0.0.45-1729876044-19868 ...
	I1028 10:59:57.510468   16316 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 10:59:57.510468   16316 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e in local docker daemon
	I1028 10:59:57.563846   16316 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4
	I1028 10:59:57.563846   16316 cache.go:56] Caching tarball of preloaded images
	I1028 10:59:57.563978   16316 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 10:59:57.567362   16316 out.go:97] Downloading Kubernetes v1.31.2 preload ...
	I1028 10:59:57.567462   16316 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 ...
	I1028 10:59:57.593262   16316 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e to local cache
	I1028 10:59:57.593711   16316 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.45-1729876044-19868@sha256_98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e.tar
	I1028 10:59:57.593895   16316 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.45-1729876044-19868@sha256_98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e.tar
	I1028 10:59:57.593895   16316 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e in local cache directory
	I1028 10:59:57.593895   16316 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e in local cache directory, skipping pull
	I1028 10:59:57.593895   16316 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e exists in cache, skipping pull
	I1028 10:59:57.593895   16316 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e as a tarball
	I1028 10:59:57.635392   16316 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4?checksum=md5:979f32540b837894423b337fec69fbf6 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4
	I1028 10:59:59.931977   16316 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 ...
	I1028 10:59:59.932942   16316 preload.go:254] verifying checksum of C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 ...
	I1028 11:00:00.740142   16316 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1028 11:00:00.741030   16316 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\download-only-174200\config.json ...
	I1028 11:00:00.741520   16316 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\download-only-174200\config.json: {Name:mkfbf10365928c0f9d0c243643a39b1ab652e42f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:00:00.741916   16316 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1028 11:00:00.742800   16316 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\windows\amd64\v1.31.2/kubectl.exe
	
	
	* The control-plane node download-only-174200 host does not exist
	  To start a cluster, run: "minikube start -p download-only-174200"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.2/LogsDuration (0.28s)

x
+
TestDownloadOnly/v1.31.2/DeleteAll (1.38s)

=== RUN   TestDownloadOnly/v1.31.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.3760598s)
--- PASS: TestDownloadOnly/v1.31.2/DeleteAll (1.38s)

x
+
TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (1.01s)

=== RUN   TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-174200
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-174200: (1.0133163s)
--- PASS: TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (1.01s)

x
+
TestDownloadOnlyKic (3.34s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p download-docker-481100 --alsologtostderr --driver=docker
aaa_download_only_test.go:232: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p download-docker-481100 --alsologtostderr --driver=docker: (1.6263969s)
helpers_test.go:175: Cleaning up "download-docker-481100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-docker-481100
--- PASS: TestDownloadOnlyKic (3.34s)

x
+
TestBinaryMirror (2.89s)

=== RUN   TestBinaryMirror
I1028 11:00:09.345547   11176 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/windows/amd64/kubectl.exe.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-043900 --alsologtostderr --binary-mirror http://127.0.0.1:58746 --driver=docker
aaa_download_only_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-043900 --alsologtostderr --binary-mirror http://127.0.0.1:58746 --driver=docker: (1.7819145s)
helpers_test.go:175: Cleaning up "binary-mirror-043900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-043900
--- PASS: TestBinaryMirror (2.89s)

x
+
TestOffline (113.35s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-592300 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-592300 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker: (1m46.8270645s)
helpers_test.go:175: Cleaning up "offline-docker-592300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-592300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-592300: (6.5202553s)
--- PASS: TestOffline (113.35s)

x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.27s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-740500
addons_test.go:939: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-740500: exit status 85 (269.1335ms)
-- stdout --
	* Profile "addons-740500" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-740500"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.27s)

x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.26s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-740500
addons_test.go:950: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-740500: exit status 85 (256.3528ms)
-- stdout --
	* Profile "addons-740500" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-740500"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.26s)

x
+
TestAddons/Setup (517.58s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-740500 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-740500 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (8m37.5747999s)
--- PASS: TestAddons/Setup (517.58s)

x
+
TestAddons/serial/Volcano (57.19s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:823: volcano-controller stabilized in 24.8302ms
addons_test.go:815: volcano-admission stabilized in 24.9446ms
addons_test.go:807: volcano-scheduler stabilized in 24.9941ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-6c9778cbdf-n9tkl" [bb04054e-f718-479d-b489-09749ff2a149] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.0074056s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5874dfdd79-wdhql" [1cf13e10-7e66-4625-a6a9-4f85110c833d] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 6.0082861s
addons_test.go:837: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-789ffc5785-jc9g9" [94f8f99a-f135-4d65-9639-c0aeccbcbe0c] Running
addons_test.go:837: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 6.0079863s
addons_test.go:842: (dbg) Run:  kubectl --context addons-740500 delete -n volcano-system job volcano-admission-init
addons_test.go:848: (dbg) Run:  kubectl --context addons-740500 create -f testdata\vcjob.yaml
addons_test.go:856: (dbg) Run:  kubectl --context addons-740500 get vcjob -n my-volcano
addons_test.go:874: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [21ed53c2-581a-4675-b4d5-207c2b098c5d] Pending
helpers_test.go:344: "test-job-nginx-0" [21ed53c2-581a-4675-b4d5-207c2b098c5d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [21ed53c2-581a-4675-b4d5-207c2b098c5d] Running
addons_test.go:874: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 26.0085273s
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-740500 addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-740500 addons disable volcano --alsologtostderr -v=1: (12.2171719s)
--- PASS: TestAddons/serial/Volcano (57.19s)

x
+
TestAddons/serial/GCPAuth/Namespaces (0.34s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-740500 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-740500 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.34s)

x
+
TestAddons/serial/GCPAuth/FakeCredentials (10.68s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-740500 create -f testdata\busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-740500 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [770d47a1-17c6-4e48-9df9-7d07b3d1d629] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [770d47a1-17c6-4e48-9df9-7d07b3d1d629] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.0070684s
addons_test.go:633: (dbg) Run:  kubectl --context addons-740500 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-740500 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-740500 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-740500 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.68s)

x
+
TestAddons/parallel/InspektorGadget (52.41s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-fxfrn" [90301d0d-91e8-4386-8bad-5dc0b970415c] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.0120511s
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-740500 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-740500 addons disable inspektor-gadget --alsologtostderr -v=1: (46.3971611s)
--- PASS: TestAddons/parallel/InspektorGadget (52.41s)

x
+
TestAddons/parallel/MetricsServer (9.36s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 9.3995ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-hfzc9" [21b14e71-0ae8-4bc1-a5c6-9d474b676f5e] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.0084649s
addons_test.go:402: (dbg) Run:  kubectl --context addons-740500 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-740500 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-740500 addons disable metrics-server --alsologtostderr -v=1: (4.1007424s)
--- PASS: TestAddons/parallel/MetricsServer (9.36s)

x
+
TestAddons/parallel/CSI (56.72s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I1028 11:10:15.587695   11176 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1028 11:10:15.667853   11176 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1028 11:10:15.667853   11176 kapi.go:107] duration metric: took 80.1572ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 80.1572ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-740500 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-740500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-740500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-740500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-740500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-740500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-740500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-740500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-740500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-740500 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-740500 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [fd509ca2-9fa3-4477-8934-5f563160b5cc] Pending
helpers_test.go:344: "task-pv-pod" [fd509ca2-9fa3-4477-8934-5f563160b5cc] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [fd509ca2-9fa3-4477-8934-5f563160b5cc] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.0824502s
addons_test.go:511: (dbg) Run:  kubectl --context addons-740500 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-740500 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-740500 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-740500 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-740500 delete pod task-pv-pod: (2.6214599s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-740500 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-740500 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-740500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-740500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-740500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-740500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-740500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-740500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-740500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-740500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-740500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-740500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
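The repeated `kubectl get pvc hpvc-restore -o jsonpath={.status.phase}` lines above are the helper's readiness poll: it re-queries the claim's phase until it reports `Bound` or the wait budget runs out. A minimal standalone sketch of that loop, with a stubbed `get_phase` standing in for the real `kubectl` call (the stub and its Pending-twice-then-Bound behavior are illustrative, not taken from the test source):

```shell
#!/usr/bin/env sh
# Stand-in for `kubectl get pvc hpvc-restore -o jsonpath={.status.phase}`:
# reports Pending twice, then Bound, to simulate a claim binding.
ATTEMPT=0
PHASE=""
get_phase() {
  ATTEMPT=$((ATTEMPT + 1))
  if [ "$ATTEMPT" -ge 3 ]; then PHASE="Bound"; else PHASE="Pending"; fi
}

# Re-query until the claim reports Bound or the retry budget is spent.
wait_for_bound() {
  retries=$1
  while [ "$retries" -gt 0 ]; do
    get_phase
    if [ "$PHASE" = "Bound" ]; then
      echo "pvc is Bound after $ATTEMPT checks"
      return 0
    fi
    retries=$((retries - 1))
  done
  echo "timed out waiting for Bound" >&2
  return 1
}

wait_for_bound 10
```

The real helper sleeps between queries and enforces the `6m0s` deadline shown above; the stub collapses that to a retry counter so the loop shape is visible on its own.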
addons_test.go:543: (dbg) Run:  kubectl --context addons-740500 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [f126c603-0b85-4e64-8c2f-007f44471693] Pending
helpers_test.go:344: "task-pv-pod-restore" [f126c603-0b85-4e64-8c2f-007f44471693] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [f126c603-0b85-4e64-8c2f-007f44471693] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.0152947s
addons_test.go:553: (dbg) Run:  kubectl --context addons-740500 delete pod task-pv-pod-restore
addons_test.go:553: (dbg) Done: kubectl --context addons-740500 delete pod task-pv-pod-restore: (2.0546391s)
addons_test.go:557: (dbg) Run:  kubectl --context addons-740500 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-740500 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-740500 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-740500 addons disable volumesnapshots --alsologtostderr -v=1: (2.1511511s)
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-740500 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-740500 addons disable csi-hostpath-driver --alsologtostderr -v=1: (8.9021577s)
--- PASS: TestAddons/parallel/CSI (56.72s)

TestAddons/parallel/Headlamp (31.48s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-740500 --alsologtostderr -v=1
addons_test.go:747: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-740500 --alsologtostderr -v=1: (1.696405s)
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-z5lnf" [cdea04d3-3263-4e42-bcea-986559092af2] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-z5lnf" [cdea04d3-3263-4e42-bcea-986559092af2] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 23.0202722s
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-740500 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-740500 addons disable headlamp --alsologtostderr -v=1: (6.7596846s)
--- PASS: TestAddons/parallel/Headlamp (31.48s)

TestAddons/parallel/CloudSpanner (7.13s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-7skcw" [df478d02-f021-49f3-bd8b-b6c274a1968f] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.0099197s
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-740500 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-740500 addons disable cloud-spanner --alsologtostderr -v=1: (1.1083172s)
--- PASS: TestAddons/parallel/CloudSpanner (7.13s)

TestAddons/parallel/LocalPath (61.86s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-740500 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-740500 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-740500 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-740500 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-740500 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-740500 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-740500 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-740500 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-740500 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [f682b730-fa35-428d-8448-f425b16effd2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [f682b730-fa35-428d-8448-f425b16effd2] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [f682b730-fa35-428d-8448-f425b16effd2] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 8.0069363s
addons_test.go:906: (dbg) Run:  kubectl --context addons-740500 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-740500 ssh "cat /opt/local-path-provisioner/pvc-b771a89e-dd0a-4e6a-8ea0-63789ab53e63_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-740500 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-740500 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-740500 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-740500 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (45.5743004s)
--- PASS: TestAddons/parallel/LocalPath (61.86s)

TestAddons/parallel/NvidiaDevicePlugin (6.96s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-8drdl" [5e0fd79b-6007-4379-95e2-a522475e5dfc] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.0202111s
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-740500 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-740500 addons disable nvidia-device-plugin --alsologtostderr -v=1: (1.9387201s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.96s)

TestAddons/parallel/Yakd (12.52s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-s2t27" [079a7c76-6ffe-423d-80ab-68c2d609cd3a] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.0089212s
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-740500 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-740500 addons disable yakd --alsologtostderr -v=1: (6.5122211s)
--- PASS: TestAddons/parallel/Yakd (12.52s)

TestAddons/parallel/AmdGpuDevicePlugin (8.38s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:977: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:344: "amd-gpu-device-plugin-vl9s2" [1a7434b2-8c68-4f10-950c-7de7d0a60308] Running
addons_test.go:977: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.0074201s
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-740500 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-740500 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: (2.3738479s)
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (8.38s)

TestAddons/StoppedEnableDisable (13.36s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-740500
addons_test.go:170: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-740500: (12.1662274s)
addons_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-740500
addons_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-740500
addons_test.go:183: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-740500
--- PASS: TestAddons/StoppedEnableDisable (13.36s)

TestCertOptions (95.24s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-720600 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-720600 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost: (1m24.3994523s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-720600 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
I1028 12:16:30.293717   11176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8555/tcp") 0).HostPort}}'" cert-options-720600
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-720600 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-720600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-720600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-720600: (8.8249495s)
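The `openssl x509 -text -noout` probe above is how the test inspects `/var/lib/minikube/certs/apiserver.crt` for the extra names and IPs passed via `--apiserver-names`/`--apiserver-ips`. A self-contained sketch of the same inspection against a throwaway self-signed certificate (assumes a modern `openssl` with `-addext` support, OpenSSL 1.1.1+; the generated cert is illustrative, not the apiserver's):

```shell
#!/usr/bin/env sh
# Generate a throwaway self-signed cert carrying the same extra SANs the test
# requests, then inspect it the way the test inspects apiserver.crt.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$dir/key.pem" -out "$dir/cert.pem" \
  -subj "/CN=localhost" \
  -addext "subjectAltName=DNS:localhost,DNS:www.google.com,IP:127.0.0.1,IP:192.168.15.15" \
  2>/dev/null

# Dump the cert as text and confirm each requested SAN shows up.
TEXT=$(openssl x509 -text -noout -in "$dir/cert.pem")
SAN_OK=1
for want in www.google.com 192.168.15.15; do
  case "$TEXT" in
    *"$want"*) ;;
    *) SAN_OK=0 ;;
  esac
done
[ "$SAN_OK" -eq 1 ] && echo "certificate lists all requested SANs"
rm -rf "$dir"
```

The real test additionally checks `--apiserver-port=8555` by inspecting the container's published ports, as the `docker container inspect` line above shows.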
--- PASS: TestCertOptions (95.24s)

TestCertExpiration (312.84s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-920600 --memory=2048 --cert-expiration=3m --driver=docker
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-920600 --memory=2048 --cert-expiration=3m --driver=docker: (1m19.789473s)
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-920600 --memory=2048 --cert-expiration=8760h --driver=docker
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-920600 --memory=2048 --cert-expiration=8760h --driver=docker: (44.4607389s)
helpers_test.go:175: Cleaning up "cert-expiration-920600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-920600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-920600: (8.5890248s)
--- PASS: TestCertExpiration (312.84s)

TestDockerFlags (85.22s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-946700 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-946700 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker: (1m15.1203161s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-946700 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-946700 ssh "sudo systemctl show docker --property=Environment --no-pager": (1.1183165s)
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-946700 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-946700 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (1.2440288s)
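The two `systemctl show docker` probes above are how the test confirms that `--docker-env=FOO=BAR --docker-env=BAZ=BAT` reached the daemon: the `Environment=` property must list every pair. A self-contained sketch of that membership check, run against a canned `Environment=` line rather than a live daemon (the sample line is illustrative, not captured from this run):

```shell
#!/usr/bin/env sh
# Canned stand-in for the output of:
#   minikube ssh "sudo systemctl show docker --property=Environment --no-pager"
ENV_LINE='Environment=FOO=BAR BAZ=BAT'

# Check that every KEY=VALUE passed via --docker-env appears in the property.
MISSING=""
for pair in FOO=BAR BAZ=BAT; do
  case " ${ENV_LINE#Environment=} " in
    *" $pair "*) echo "found $pair" ;;
    *) MISSING="$MISSING $pair" ;;
  esac
done
if [ -z "$MISSING" ]; then
  echo "all docker-env pairs present"
else
  echo "missing:$MISSING" >&2
fi
```

The `--docker-opt` flags are verified the same way against the `ExecStart=` property, which carries dockerd's command line.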
helpers_test.go:175: Cleaning up "docker-flags-946700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-946700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-946700: (7.7394535s)
--- PASS: TestDockerFlags (85.22s)

TestForceSystemdFlag (98.71s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-592300 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker
docker_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-592300 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker: (1m32.045287s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-592300 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-592300 ssh "docker info --format {{.CgroupDriver}}": (1.2394694s)
helpers_test.go:175: Cleaning up "force-systemd-flag-592300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-592300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-592300: (5.4249917s)
--- PASS: TestForceSystemdFlag (98.71s)

TestForceSystemdEnv (101.98s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-897900 --memory=2048 --alsologtostderr -v=5 --driver=docker
E1028 12:08:50.198277   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-740500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
docker_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-897900 --memory=2048 --alsologtostderr -v=5 --driver=docker: (1m34.6972055s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-897900 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-897900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-897900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-897900: (6.3819171s)
--- PASS: TestForceSystemdEnv (101.98s)

TestErrorSpam/start (3.62s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-883200 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-883200 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-883200 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-883200 start --dry-run: (1.1717443s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-883200 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-883200 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-883200 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-883200 start --dry-run: (1.1888643s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-883200 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-883200 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-883200 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-883200 start --dry-run: (1.2527097s)
--- PASS: TestErrorSpam/start (3.62s)

TestErrorSpam/status (3.07s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-883200 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-883200 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-883200 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-883200 status: (1.2978813s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-883200 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-883200 status
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-883200 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-883200 status
--- PASS: TestErrorSpam/status (3.07s)

TestErrorSpam/pause (3.27s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-883200 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-883200 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-883200 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-883200 pause: (1.4442434s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-883200 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-883200 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-883200 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-883200 pause
--- PASS: TestErrorSpam/pause (3.27s)

TestErrorSpam/unpause (3.26s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-883200 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-883200 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-883200 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-883200 unpause: (1.1732906s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-883200 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-883200 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-883200 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-883200 unpause: (1.1068864s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-883200 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-883200 unpause
--- PASS: TestErrorSpam/unpause (3.26s)

TestErrorSpam/stop (20.4s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-883200 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-883200 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-883200 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-883200 stop: (12.0799819s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-883200 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-883200 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-883200 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-883200 stop: (4.0846571s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-883200 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-883200 stop
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-883200 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-883200 stop: (4.2323744s)
--- PASS: TestErrorSpam/stop (20.40s)

TestFunctional/serial/CopySyncFile (0.03s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11176\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.03s)

TestFunctional/serial/StartWithProxy (92.77s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-928900 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker
E1028 11:13:50.114484   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-740500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:13:50.121472   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-740500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:13:50.133325   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-740500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:13:50.155488   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-740500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:13:50.197871   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-740500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:13:50.280158   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-740500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:13:50.442456   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-740500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:13:50.765227   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-740500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:13:51.407768   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-740500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:13:52.693587   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-740500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:13:55.255916   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-740500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:14:00.378566   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-740500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:14:10.620698   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-740500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:14:31.102717   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-740500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:15:12.065412   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-740500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-928900 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker: (1m32.7622318s)
--- PASS: TestFunctional/serial/StartWithProxy (92.77s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (44.15s)

=== RUN   TestFunctional/serial/SoftStart
I1028 11:15:12.187140   11176 config.go:182] Loaded profile config "functional-928900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test.go:659: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-928900 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-928900 --alsologtostderr -v=8: (44.1469911s)
functional_test.go:663: soft start took 44.148267s for "functional-928900" cluster.
I1028 11:15:56.335530   11176 config.go:182] Loaded profile config "functional-928900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/SoftStart (44.15s)

TestFunctional/serial/KubeContext (0.13s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.13s)

TestFunctional/serial/KubectlGetPods (0.22s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-928900 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.22s)

TestFunctional/serial/CacheCmd/cache/add_remote (6.41s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-windows-amd64.exe -p functional-928900 cache add registry.k8s.io/pause:3.1: (2.3434322s)
functional_test.go:1049: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-windows-amd64.exe -p functional-928900 cache add registry.k8s.io/pause:3.3: (2.0598009s)
functional_test.go:1049: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-windows-amd64.exe -p functional-928900 cache add registry.k8s.io/pause:latest: (2.0068982s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (6.41s)

TestFunctional/serial/CacheCmd/cache/add_local (3.48s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-928900 C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local914071660\001
functional_test.go:1077: (dbg) Done: docker build -t minikube-local-cache-test:functional-928900 C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local914071660\001: (1.5455685s)
functional_test.go:1089: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 cache add minikube-local-cache-test:functional-928900
functional_test.go:1089: (dbg) Done: out/minikube-windows-amd64.exe -p functional-928900 cache add minikube-local-cache-test:functional-928900: (1.5101877s)
functional_test.go:1094: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 cache delete minikube-local-cache-test:functional-928900
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-928900
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (3.48s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.30s)

TestFunctional/serial/CacheCmd/cache/list (0.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.27s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.81s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.81s)

TestFunctional/serial/CacheCmd/cache/cache_reload (3.96s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-928900 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (827.3661ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-windows-amd64.exe -p functional-928900 cache reload: (1.5518449s)
functional_test.go:1163: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (3.96s)

TestFunctional/serial/CacheCmd/cache/delete (0.6s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.60s)

TestFunctional/serial/MinikubeKubectlCmd (0.5s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 kubectl -- --context functional-928900 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.50s)

TestFunctional/serial/ExtraConfig (47.56s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-928900 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1028 11:16:33.989075   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-740500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-928900 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (47.5602581s)
functional_test.go:761: restart took 47.5602581s for "functional-928900" cluster.
I1028 11:17:06.089684   11176 config.go:182] Loaded profile config "functional-928900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/ExtraConfig (47.56s)

TestFunctional/serial/ComponentHealth (0.18s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-928900 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.18s)

TestFunctional/serial/LogsCmd (2.66s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 logs
functional_test.go:1236: (dbg) Done: out/minikube-windows-amd64.exe -p functional-928900 logs: (2.658277s)
--- PASS: TestFunctional/serial/LogsCmd (2.66s)

TestFunctional/serial/LogsFileCmd (2.97s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 logs --file C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialLogsFileCmd2682299610\001\logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-windows-amd64.exe -p functional-928900 logs --file C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialLogsFileCmd2682299610\001\logs.txt: (2.9625451s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.97s)

TestFunctional/serial/InvalidService (5.32s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-928900 apply -f testdata\invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-928900
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-928900: exit status 115 (1.1281759s)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32735 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube_service_9c977cb937a5c6299cc91c983e64e702e081bf76_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-928900 delete -f testdata\invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (5.32s)

TestFunctional/parallel/ConfigCmd (1.87s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-928900 config get cpus: exit status 14 (275.602ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-928900 config get cpus: exit status 14 (249.0048ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (1.87s)

TestFunctional/parallel/DryRun (2.22s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-928900 --dry-run --memory 250MB --alsologtostderr --driver=docker
functional_test.go:974: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-928900 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (932.3603ms)
-- stdout --
	* [functional-928900] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5073 Build 19045.5073
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19875
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1028 11:17:23.201368   11524 out.go:345] Setting OutFile to fd 1480 ...
	I1028 11:17:23.288370   11524 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:17:23.288370   11524 out.go:358] Setting ErrFile to fd 1492...
	I1028 11:17:23.288370   11524 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:17:23.310380   11524 out.go:352] Setting JSON to false
	I1028 11:17:23.316378   11524 start.go:129] hostinfo: {"hostname":"minikube4","uptime":1340,"bootTime":1730112903,"procs":208,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5073 Build 19045.5073","kernelVersion":"10.0.19045.5073 Build 19045.5073","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1028 11:17:23.316378   11524 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 11:17:23.320373   11524 out.go:177] * [functional-928900] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5073 Build 19045.5073
	I1028 11:17:23.324361   11524 notify.go:220] Checking for updates...
	I1028 11:17:23.326370   11524 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1028 11:17:23.328368   11524 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 11:17:23.331379   11524 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1028 11:17:23.333369   11524 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 11:17:23.335373   11524 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 11:17:23.338380   11524 config.go:182] Loaded profile config "functional-928900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 11:17:23.340368   11524 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 11:17:23.539374   11524 docker.go:123] docker version: linux-27.2.0:Docker Desktop 4.34.2 (167172)
	I1028 11:17:23.548383   11524 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1028 11:17:23.903340   11524 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:true NGoroutines:81 SystemTime:2024-10-28 11:17:23.877080421 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657532416 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe Schema
Version:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https:/
/github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I1028 11:17:23.906990   11524 out.go:177] * Using the docker driver based on existing profile
	I1028 11:17:23.908971   11524 start.go:297] selected driver: docker
	I1028 11:17:23.908971   11524 start.go:901] validating driver "docker" against &{Name:functional-928900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-928900 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:17:23.908971   11524 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 11:17:23.963113   11524 out.go:201] 
	W1028 11:17:23.966121   11524 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1028 11:17:23.968102   11524 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-928900 --dry-run --alsologtostderr -v=1 --driver=docker
functional_test.go:991: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-928900 --dry-run --alsologtostderr -v=1 --driver=docker: (1.2871595s)
--- PASS: TestFunctional/parallel/DryRun (2.22s)

TestFunctional/parallel/InternationalLanguage (1.13s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-928900 --dry-run --memory 250MB --alsologtostderr --driver=docker
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-928900 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (1.1319094s)
-- stdout --
	* [functional-928900] minikube v1.34.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.5073 Build 19045.5073
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19875
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1028 11:17:22.248658   12972 out.go:345] Setting OutFile to fd 1324 ...
	I1028 11:17:22.364167   12972 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:17:22.364167   12972 out.go:358] Setting ErrFile to fd 1276...
	I1028 11:17:22.364167   12972 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:17:22.472144   12972 out.go:352] Setting JSON to false
	I1028 11:17:22.478151   12972 start.go:129] hostinfo: {"hostname":"minikube4","uptime":1339,"bootTime":1730112903,"procs":208,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5073 Build 19045.5073","kernelVersion":"10.0.19045.5073 Build 19045.5073","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1028 11:17:22.478151   12972 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1028 11:17:22.482178   12972 out.go:177] * [functional-928900] minikube v1.34.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.5073 Build 19045.5073
	I1028 11:17:22.486165   12972 notify.go:220] Checking for updates...
	I1028 11:17:22.488141   12972 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1028 11:17:22.490136   12972 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 11:17:22.493147   12972 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1028 11:17:22.495146   12972 out.go:177]   - MINIKUBE_LOCATION=19875
	I1028 11:17:22.497150   12972 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 11:17:22.503152   12972 config.go:182] Loaded profile config "functional-928900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 11:17:22.504143   12972 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 11:17:22.732169   12972 docker.go:123] docker version: linux-27.2.0:Docker Desktop 4.34.2 (167172)
	I1028 11:17:22.743152   12972 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1028 11:17:23.075379   12972 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:true NGoroutines:81 SystemTime:2024-10-28 11:17:23.051957468 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657532416 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I1028 11:17:23.081359   12972 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1028 11:17:23.083365   12972 start.go:297] selected driver: docker
	I1028 11:17:23.083365   12972 start.go:901] validating driver "docker" against &{Name:functional-928900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-928900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:17:23.083365   12972 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 11:17:23.157429   12972 out.go:201] 
	W1028 11:17:23.159420   12972 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1028 11:17:23.162370   12972 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (1.13s)
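The RSRC_INSUFFICIENT_REQ_MEMORY exit in the French stderr above is the expected outcome: the test deliberately requests 250 MiB, and minikube's preflight check rejects anything below its 1800 MB floor (the message translates to "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250 MiB is below the usable minimum of 1800 MB"). Note the mixed units in the message, MiB requested versus MB minimum; the comparison can be sketched as follows (a local illustration with made-up variable names, not minikube's actual validation code):

```shell
requested_mib=250   # what the test passes via --memory (MiB)
floor_mb=1800       # minikube's usable minimum (MB)

# Convert MiB (1048576 bytes) to MB (1000000 bytes) before comparing
requested_mb=$((requested_mib * 1048576 / 1000000))
echo "requested: ${requested_mb} MB, floor: ${floor_mb} MB"
[ "$requested_mb" -lt "$floor_mb" ] && echo "RSRC_INSUFFICIENT_REQ_MEMORY"
```

250 MiB is roughly 262 MB, well under the floor, so the test passes because the localized error matched what it expected.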

                                                
                                    
TestFunctional/parallel/StatusCmd (4.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 status
functional_test.go:854: (dbg) Done: out/minikube-windows-amd64.exe -p functional-928900 status: (1.3837565s)
functional_test.go:860: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:860: (dbg) Done: out/minikube-windows-amd64.exe -p functional-928900 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (1.2955163s)
functional_test.go:872: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 status -o json
functional_test.go:872: (dbg) Done: out/minikube-windows-amd64.exe -p functional-928900 status -o json: (1.3919124s)
--- PASS: TestFunctional/parallel/StatusCmd (4.07s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.86s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (42.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [b9804189-e6ae-4e66-9f4c-ec6a9431e6b3] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.0091654s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-928900 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-928900 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-928900 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-928900 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3c633cdd-4e47-4046-b133-45a4fd80eb3e] Pending
helpers_test.go:344: "sp-pod" [3c633cdd-4e47-4046-b133-45a4fd80eb3e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [3c633cdd-4e47-4046-b133-45a4fd80eb3e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 26.0101537s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-928900 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-928900 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-928900 delete -f testdata/storage-provisioner/pod.yaml: (1.6823638s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-928900 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [35535636-8485-4724-ab12-b73e056c15c8] Pending
helpers_test.go:344: "sp-pod" [35535636-8485-4724-ab12-b73e056c15c8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [35535636-8485-4724-ab12-b73e056c15c8] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.0077788s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-928900 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (42.76s)

                                                
                                    
TestFunctional/parallel/SSHCmd (1.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (1.48s)

                                                
                                    
TestFunctional/parallel/CpCmd (5.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 ssh -n functional-928900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 cp functional-928900:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalparallelCpCmd1510927318\001\cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-928900 cp functional-928900:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalparallelCpCmd1510927318\001\cp-test.txt: (1.0499855s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 ssh -n functional-928900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-928900 ssh -n functional-928900 "sudo cat /home/docker/cp-test.txt": (1.1009883s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-928900 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt: (1.0042753s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 ssh -n functional-928900 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-928900 ssh -n functional-928900 "sudo cat /tmp/does/not/exist/cp-test.txt": (1.0909s)
--- PASS: TestFunctional/parallel/CpCmd (5.86s)
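The last two `cp` runs above copy into `/tmp/does/not/exist/`, showing that `minikube cp` creates missing parent directories on the node before writing the file. Locally the same round-trip looks like `mkdir -p` followed by `cp` (a sketch of the semantics only, not minikube's implementation; paths are temporary stand-ins):

```shell
# Source file standing in for testdata/cp-test.txt
src=$(mktemp)
echo "cp-test content" > "$src"

# Target path whose parents do not exist yet, like /tmp/does/not/exist on the node
dest_dir=$(mktemp -d)/does/not/exist
mkdir -p "$dest_dir"               # minikube cp performs this step for you
cp "$src" "$dest_dir/cp-test.txt"
cat "$dest_dir/cp-test.txt"        # the sudo cat checks above verify this step
```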

                                                
                                    
TestFunctional/parallel/MySQL (77.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-928900 replace --force -f testdata\mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-tfrs5" [ef525116-ebf2-460a-8077-47a1c0fbaa8e] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-tfrs5" [ef525116-ebf2-460a-8077-47a1c0fbaa8e] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 1m3.0071469s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-928900 exec mysql-6cdb49bbb-tfrs5 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-928900 exec mysql-6cdb49bbb-tfrs5 -- mysql -ppassword -e "show databases;": exit status 1 (272.0948ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1028 11:19:14.701441   11176 retry.go:31] will retry after 726.456836ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-928900 exec mysql-6cdb49bbb-tfrs5 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-928900 exec mysql-6cdb49bbb-tfrs5 -- mysql -ppassword -e "show databases;": exit status 1 (300.1919ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1028 11:19:15.738136   11176 retry.go:31] will retry after 2.146270392s: exit status 1
E1028 11:19:17.834164   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-740500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
functional_test.go:1807: (dbg) Run:  kubectl --context functional-928900 exec mysql-6cdb49bbb-tfrs5 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-928900 exec mysql-6cdb49bbb-tfrs5 -- mysql -ppassword -e "show databases;": exit status 1 (347.6416ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1028 11:19:18.245224   11176 retry.go:31] will retry after 1.413645276s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-928900 exec mysql-6cdb49bbb-tfrs5 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-928900 exec mysql-6cdb49bbb-tfrs5 -- mysql -ppassword -e "show databases;": exit status 1 (255.9672ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1028 11:19:19.925625   11176 retry.go:31] will retry after 3.814864562s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-928900 exec mysql-6cdb49bbb-tfrs5 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-928900 exec mysql-6cdb49bbb-tfrs5 -- mysql -ppassword -e "show databases;": exit status 1 (362.1073ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1028 11:19:24.111390   11176 retry.go:31] will retry after 3.811694259s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-928900 exec mysql-6cdb49bbb-tfrs5 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (77.29s)
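The `Access denied` and `Can't connect to local MySQL server` errors above are the usual MySQL warm-up: the pod reports Running before mysqld finishes initializing, so the harness retries `show databases;` with growing delays (the `retry.go:31] will retry after ...` lines) until one attempt succeeds. The shape of that loop can be sketched as follows (a simplified stand-in; the `flaky` command and the omitted delays are illustrative, not the harness's actual schedule):

```shell
# Stand-in for the mysql probe: fails twice (server "still starting"), then succeeds.
calls=0
flaky() {
  calls=$((calls + 1))
  [ "$calls" -ge 3 ]
}

# Retry a command up to $1 times, announcing each retry, the same shape
# as the harness's "will retry after ..." loop (backoff sleeps omitted here).
retry() {
  max=$1; shift
  n=1
  while ! "$@"; do
    [ "$n" -ge "$max" ] && return 1
    echo "attempt $n failed; will retry"
    n=$((n + 1))
  done
}

retry 5 flaky && echo "healthy after $calls attempts"
```

The test only fails if the full timeout elapses without a successful attempt, which is why the intermediate non-zero exits above still end in PASS.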

                                                
                                    
TestFunctional/parallel/FileSync (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/11176/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 ssh "sudo cat /etc/test/nested/copy/11176/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.68s)

                                                
                                    
TestFunctional/parallel/CertSync (4.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/11176.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 ssh "sudo cat /etc/ssl/certs/11176.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/11176.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 ssh "sudo cat /usr/share/ca-certificates/11176.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/111762.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 ssh "sudo cat /etc/ssl/certs/111762.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/111762.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 ssh "sudo cat /usr/share/ca-certificates/111762.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (4.97s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-928900 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.24s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-928900 ssh "sudo systemctl is-active crio": exit status 1 (721.0311ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.72s)
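Exit status 3 here is not a failure: `systemctl is-active` exits 3 (printing `inactive` on stdout) when a unit is not running, and `minikube ssh` propagates the remote command's status, so the test reads the combination as "crio is disabled", exactly what it wants on a Docker-runtime cluster. The propagation can be sketched as follows (a hypothetical local stand-in for the remote `systemctl` call):

```shell
# A subshell standing in for `systemctl is-active crio` on the node:
# it prints the state and exits 3, as systemctl does for an inactive unit.
sh -c 'echo inactive; exit 3'
status=$?
echo "exit status: $status"   # the status ssh hands back to the test
```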

                                                
                                    
TestFunctional/parallel/License (2.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2288: (dbg) Done: out/minikube-windows-amd64.exe license: (2.8020968s)
--- PASS: TestFunctional/parallel/License (2.82s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (20.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-928900 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-928900 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-9tgxr" [d0c722b5-18e7-4147-8304-59ce4c957acd] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-9tgxr" [d0c722b5-18e7-4147-8304-59ce4c957acd] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 20.0090704s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (20.45s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (1.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1275: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.0871088s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (1.47s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (1.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1310: (dbg) Done: out/minikube-windows-amd64.exe profile list: (1.2153309s)
functional_test.go:1315: Took "1.2153309s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1329: Took "381.9927ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (1.60s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (1.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1361: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (1.439991s)
functional_test.go:1366: Took "1.4409818s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1379: Took "356.2923ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (1.80s)

TestFunctional/parallel/DockerEnv/powershell (7.13s)

=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:499: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-928900 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-928900"
functional_test.go:499: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-928900 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-928900": (4.4086725s)
functional_test.go:522: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-928900 docker-env | Invoke-Expression ; docker images"
functional_test.go:522: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-928900 docker-env | Invoke-Expression ; docker images": (2.7104144s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (7.13s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.49s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.49s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.48s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.48s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.48s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.48s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.97s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-928900 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-928900 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-928900 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 9036: OpenProcess: The parameter is incorrect.
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-928900 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.97s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-928900 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (30.52s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-928900 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [b776fab6-d6f8-4357-bc35-cdc0ac722002] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [b776fab6-d6f8-4357-bc35-cdc0ac722002] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 30.0081496s
I1028 11:18:05.416611   11176 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (30.52s)

TestFunctional/parallel/ServiceCmd/List (1.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 service list
functional_test.go:1459: (dbg) Done: out/minikube-windows-amd64.exe -p functional-928900 service list: (1.1145769s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.11s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 service list -o json
functional_test.go:1489: (dbg) Done: out/minikube-windows-amd64.exe -p functional-928900 service list -o json: (1.1004948s)
functional_test.go:1494: Took "1.1004948s" to run "out/minikube-windows-amd64.exe -p functional-928900 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.10s)

TestFunctional/parallel/ServiceCmd/HTTPS (15.01s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 service --namespace=default --https --url hello-node
functional_test.go:1509: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-928900 service --namespace=default --https --url hello-node: exit status 1 (15.0143787s)
-- stdout --
	https://127.0.0.1:59835
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.
** /stderr **
functional_test.go:1522: found endpoint: https://127.0.0.1:59835
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.01s)

TestFunctional/parallel/ServiceCmd/Format (15.01s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 service hello-node --url --format={{.IP}}
functional_test.go:1540: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-928900 service hello-node --url --format={{.IP}}: exit status 1 (15.0109306s)
-- stdout --
	127.0.0.1
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.
** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.18s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-928900 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.18s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.21s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-928900 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 8060: OpenProcess: The parameter is incorrect.
helpers_test.go:508: unable to kill pid 15400: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.21s)

TestFunctional/parallel/ServiceCmd/URL (15.01s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 service hello-node --url
functional_test.go:1559: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-928900 service hello-node --url: exit status 1 (15.0111177s)
-- stdout --
	http://127.0.0.1:59889
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.
** /stderr **
functional_test.go:1565: found endpoint for hello-node: http://127.0.0.1:59889
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.01s)

TestFunctional/parallel/Version/short (0.26s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 version --short
--- PASS: TestFunctional/parallel/Version/short (0.26s)

TestFunctional/parallel/Version/components (1.53s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-windows-amd64.exe -p functional-928900 version -o=json --components: (1.528861s)
--- PASS: TestFunctional/parallel/Version/components (1.53s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-928900 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.2
registry.k8s.io/kube-proxy:v1.31.2
registry.k8s.io/kube-controller-manager:v1.31.2
registry.k8s.io/kube-apiserver:v1.31.2
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-928900
docker.io/kicbase/echo-server:functional-928900
functional_test.go:269: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-928900 image ls --format short --alsologtostderr:
I1028 11:18:32.785192    1804 out.go:345] Setting OutFile to fd 760 ...
I1028 11:18:32.855091    1804 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 11:18:32.855091    1804 out.go:358] Setting ErrFile to fd 1348...
I1028 11:18:32.855091    1804 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 11:18:32.870026    1804 config.go:182] Loaded profile config "functional-928900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1028 11:18:32.870026    1804 config.go:182] Loaded profile config "functional-928900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1028 11:18:32.887022    1804 cli_runner.go:164] Run: docker container inspect functional-928900 --format={{.State.Status}}
I1028 11:18:32.964029    1804 ssh_runner.go:195] Run: systemctl --version
I1028 11:18:32.972031    1804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-928900
I1028 11:18:33.041655    1804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59547 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-928900\id_rsa Username:docker}
I1028 11:18:33.171109    1804 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.57s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.83s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-928900 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-apiserver              | v1.31.2           | 9499c9960544e | 94.2MB |
| registry.k8s.io/kube-proxy                  | v1.31.2           | 505d571f5fd56 | 91.5MB |
| docker.io/kicbase/echo-server               | functional-928900 | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/kube-controller-manager     | v1.31.2           | 0486b6c53a1b5 | 88.4MB |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| localhost/my-image                          | functional-928900 | 2f7b7ef4f5f5d | 1.24MB |
| docker.io/library/minikube-local-cache-test | functional-928900 | 4ac573abe3ffa | 30B    |
| registry.k8s.io/kube-scheduler              | v1.31.2           | 847c7bc1a5418 | 67.4MB |
| docker.io/library/nginx                     | latest            | 3b25b682ea82b | 192MB  |
| registry.k8s.io/coredns/coredns             | v1.11.3           | c69fa2e9cbf5f | 61.8MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/library/nginx                     | alpine            | cb8f91112b6b5 | 47MB   |
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-928900 image ls --format table --alsologtostderr:
I1028 11:18:39.133313    4596 out.go:345] Setting OutFile to fd 1228 ...
I1028 11:18:39.212305    4596 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 11:18:39.212305    4596 out.go:358] Setting ErrFile to fd 1080...
I1028 11:18:39.212305    4596 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 11:18:39.232825    4596 config.go:182] Loaded profile config "functional-928900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1028 11:18:39.233487    4596 config.go:182] Loaded profile config "functional-928900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1028 11:18:39.261000    4596 cli_runner.go:164] Run: docker container inspect functional-928900 --format={{.State.Status}}
I1028 11:18:39.354982    4596 ssh_runner.go:195] Run: systemctl --version
I1028 11:18:39.365994    4596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-928900
I1028 11:18:39.448043    4596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59547 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-928900\id_rsa Username:docker}
I1028 11:18:39.620974    4596 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
E1028 11:18:50.117981   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-740500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.83s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.67s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-928900 image ls --format json --alsologtostderr:
[{"id":"847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.2"],"size":"67400000"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-928900"],"size":"4940000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.2"],"size":"94200000"},{"id":"0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.2"],"size":"88400000"},{"id":"cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.2"],"size":"91500000"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61800000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"2f7b7ef4f5f5d84bb9e96086acbfb6fde83ababf448dc22cfc27056c34afaf7f","repoDigests":[],"repoTags":["localhost/my-image:functional-928900"],"size":"1240000"},{"id":"4ac573abe3ffa69605b14b2f488d576611a8faf39ce0bc92b46dcd0f1cf499b1","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-928900"],"size":"30"},{"id":"3b25b682ea82b2db3cc4fd48db818be788ee3f902ac7378090cf2624ec2442df","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"192000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-928900 image ls --format json --alsologtostderr:
I1028 11:18:38.471766   13816 out.go:345] Setting OutFile to fd 1684 ...
I1028 11:18:38.551731   13816 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 11:18:38.551731   13816 out.go:358] Setting ErrFile to fd 1692...
I1028 11:18:38.551731   13816 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 11:18:38.569914   13816 config.go:182] Loaded profile config "functional-928900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1028 11:18:38.570448   13816 config.go:182] Loaded profile config "functional-928900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1028 11:18:38.587140   13816 cli_runner.go:164] Run: docker container inspect functional-928900 --format={{.State.Status}}
I1028 11:18:38.686104   13816 ssh_runner.go:195] Run: systemctl --version
I1028 11:18:38.696095   13816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-928900
I1028 11:18:38.772316   13816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59547 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-928900\id_rsa Username:docker}
I1028 11:18:38.912276   13816 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.67s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.69s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-928900 image ls --format yaml --alsologtostderr:
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61800000"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-928900
size: "4940000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.2
size: "91500000"
- id: cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 4ac573abe3ffa69605b14b2f488d576611a8faf39ce0bc92b46dcd0f1cf499b1
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-928900
size: "30"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 3b25b682ea82b2db3cc4fd48db818be788ee3f902ac7378090cf2624ec2442df
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "192000000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.2
size: "94200000"
- id: 0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.2
size: "88400000"
- id: 847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.2
size: "67400000"
functional_test.go:269: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-928900 image ls --format yaml --alsologtostderr:
I1028 11:18:37.749481    1580 out.go:345] Setting OutFile to fd 1432 ...
I1028 11:18:37.826483    1580 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 11:18:37.826483    1580 out.go:358] Setting ErrFile to fd 1156...
I1028 11:18:37.826483    1580 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 11:18:37.845483    1580 config.go:182] Loaded profile config "functional-928900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1028 11:18:37.845483    1580 config.go:182] Loaded profile config "functional-928900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1028 11:18:37.869626    1580 cli_runner.go:164] Run: docker container inspect functional-928900 --format={{.State.Status}}
I1028 11:18:37.973179    1580 ssh_runner.go:195] Run: systemctl --version
I1028 11:18:37.990837    1580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-928900
I1028 11:18:38.072421    1580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59547 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-928900\id_rsa Username:docker}
I1028 11:18:38.211664    1580 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.69s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (6.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-928900 ssh pgrep buildkitd: exit status 1 (753.6592ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 image build -t localhost/my-image:functional-928900 testdata\build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-windows-amd64.exe -p functional-928900 image build -t localhost/my-image:functional-928900 testdata\build --alsologtostderr: (5.4115737s)
functional_test.go:323: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-928900 image build -t localhost/my-image:functional-928900 testdata\build --alsologtostderr:
I1028 11:18:34.117493    9392 out.go:345] Setting OutFile to fd 1312 ...
I1028 11:18:34.221254    9392 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 11:18:34.221254    9392 out.go:358] Setting ErrFile to fd 1504...
I1028 11:18:34.221254    9392 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 11:18:34.237122    9392 config.go:182] Loaded profile config "functional-928900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1028 11:18:34.253362    9392 config.go:182] Loaded profile config "functional-928900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1028 11:18:34.272640    9392 cli_runner.go:164] Run: docker container inspect functional-928900 --format={{.State.Status}}
I1028 11:18:34.370598    9392 ssh_runner.go:195] Run: systemctl --version
I1028 11:18:34.378604    9392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-928900
I1028 11:18:34.451591    9392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59547 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-928900\id_rsa Username:docker}
I1028 11:18:34.581262    9392 build_images.go:161] Building image from path: C:\Users\jenkins.minikube4\AppData\Local\Temp\build.3835401563.tar
I1028 11:18:34.591780    9392 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1028 11:18:34.624116    9392 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3835401563.tar
I1028 11:18:34.639481    9392 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3835401563.tar: stat -c "%s %y" /var/lib/minikube/build/build.3835401563.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3835401563.tar': No such file or directory
I1028 11:18:34.639707    9392 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\AppData\Local\Temp\build.3835401563.tar --> /var/lib/minikube/build/build.3835401563.tar (3072 bytes)
I1028 11:18:34.742879    9392 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3835401563
I1028 11:18:34.833002    9392 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3835401563 -xf /var/lib/minikube/build/build.3835401563.tar
I1028 11:18:34.853884    9392 docker.go:360] Building image: /var/lib/minikube/build/build.3835401563
I1028 11:18:34.864969    9392 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-928900 /var/lib/minikube/build/build.3835401563
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.8s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B 0.0s done
#3 DONE 0.1s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.3s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.9s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 1.2s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.2s

                                                
                                                
#8 exporting to image
#8 exporting layers
#8 exporting layers 0.2s done
#8 writing image sha256:2f7b7ef4f5f5d84bb9e96086acbfb6fde83ababf448dc22cfc27056c34afaf7f
#8 writing image sha256:2f7b7ef4f5f5d84bb9e96086acbfb6fde83ababf448dc22cfc27056c34afaf7f done
#8 naming to localhost/my-image:functional-928900 0.0s done
#8 DONE 0.3s
I1028 11:18:39.256974    9392 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-928900 /var/lib/minikube/build/build.3835401563: (4.3914747s)
I1028 11:18:39.273985    9392 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3835401563
I1028 11:18:39.321122    9392 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3835401563.tar
I1028 11:18:39.344980    9392 build_images.go:217] Built localhost/my-image:functional-928900 from C:\Users\jenkins.minikube4\AppData\Local\Temp\build.3835401563.tar
I1028 11:18:39.344980    9392 build_images.go:133] succeeded building to: functional-928900
I1028 11:18:39.344980    9392 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (6.84s)
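The ImageBuild stderr above shows minikube staging the build context as a tarball on the node: check whether the tar already exists, copy it over ssh, extract it, run `docker build`, then clean up. A minimal local sketch of that tar-staging sequence, using hypothetical `mktemp` paths in place of `/var/lib/minikube/build` and omitting the ssh transfer and `docker build` steps so it runs without a cluster:

```shell
# Recreate the staging steps from build_images.go locally.
# Paths are hypothetical stand-ins for /var/lib/minikube/build/build.NNN.
set -eu
work=$(mktemp -d)

# Package a build context (host-side analogue of build.3835401563.tar).
mkdir -p "$work/ctx"
printf 'hello' > "$work/ctx/content.txt"
tar -C "$work/ctx" -cf "$work/build.tar" content.txt

# Existence check, as in: stat -c "%s %y" /var/lib/minikube/build/build.NNN.tar
stat -c "%s %y" "$work/build.tar" >/dev/null

# Unpack into a build directory, as in: sudo tar -C .../build.NNN -xf .../build.NNN.tar
mkdir -p "$work/build"
tar -C "$work/build" -xf "$work/build.tar"

# Here the real flow would run: docker build -t localhost/my-image:TAG "$work/build"
out=$(cat "$work/build/content.txt")
echo "$out"

# Cleanup mirrors the final: sudo rm -rf .../build.NNN && sudo rm -f .../build.NNN.tar
rm -rf "$work"
```

The same tar indirection is why the log reports both a host path (`C:\...\build.3835401563.tar`) and a node path (`/var/lib/minikube/build/build.3835401563.tar`) for one build.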

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.7938186s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-928900
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.92s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 image load --daemon kicbase/echo-server:functional-928900 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-windows-amd64.exe -p functional-928900 image load --daemon kicbase/echo-server:functional-928900 --alsologtostderr: (1.9953747s)
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.88s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 image load --daemon kicbase/echo-server:functional-928900 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-windows-amd64.exe -p functional-928900 image load --daemon kicbase/echo-server:functional-928900 --alsologtostderr: (2.0051059s)
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.83s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-928900
functional_test.go:245: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 image load --daemon kicbase/echo-server:functional-928900 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-windows-amd64.exe -p functional-928900 image load --daemon kicbase/echo-server:functional-928900 --alsologtostderr: (1.7874492s)
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.60s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 image save kicbase/echo-server:functional-928900 C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-windows-amd64.exe -p functional-928900 image save kicbase/echo-server:functional-928900 C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr: (1.8023237s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.80s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (1.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 image rm kicbase/echo-server:functional-928900 --alsologtostderr
functional_test.go:392: (dbg) Done: out/minikube-windows-amd64.exe -p functional-928900 image rm kicbase/echo-server:functional-928900 --alsologtostderr: (1.0004773s)
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.70s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:409: (dbg) Done: out/minikube-windows-amd64.exe -p functional-928900 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr: (1.3945108s)
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.18s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-928900
functional_test.go:424: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-928900 image save --daemon kicbase/echo-server:functional-928900 --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-windows-amd64.exe -p functional-928900 image save --daemon kicbase/echo-server:functional-928900 --alsologtostderr: (2.2735457s)
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-928900
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.48s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.21s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-928900
--- PASS: TestFunctional/delete_echo-server_images (0.21s)

                                                
                                    
TestFunctional/delete_my-image_image (0.1s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-928900
--- PASS: TestFunctional/delete_my-image_image (0.10s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.09s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-928900
--- PASS: TestFunctional/delete_minikube_cached_images (0.09s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (207.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-389500 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker
E1028 11:23:50.122577   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-740500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-389500 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker: (3m25.526759s)
ha_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 status -v=7 --alsologtostderr
ha_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe -p ha-389500 status -v=7 --alsologtostderr: (2.1692703s)
--- PASS: TestMultiControlPlane/serial/StartCluster (207.70s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (25.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-389500 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-389500 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-389500 -- rollout status deployment/busybox: (15.6102094s)
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-389500 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-389500 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-389500 -- exec busybox-7dff88458-62j9w -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-389500 -- exec busybox-7dff88458-62j9w -- nslookup kubernetes.io: (1.8999095s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-389500 -- exec busybox-7dff88458-dlklq -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-389500 -- exec busybox-7dff88458-dlklq -- nslookup kubernetes.io: (1.5974335s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-389500 -- exec busybox-7dff88458-szl8s -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-389500 -- exec busybox-7dff88458-szl8s -- nslookup kubernetes.io: (1.5662956s)
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-389500 -- exec busybox-7dff88458-62j9w -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-389500 -- exec busybox-7dff88458-dlklq -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-389500 -- exec busybox-7dff88458-szl8s -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-389500 -- exec busybox-7dff88458-62j9w -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-389500 -- exec busybox-7dff88458-dlklq -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-389500 -- exec busybox-7dff88458-szl8s -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (25.63s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (3.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-389500 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-389500 -- exec busybox-7dff88458-62j9w -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-389500 -- exec busybox-7dff88458-62j9w -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-389500 -- exec busybox-7dff88458-dlklq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-389500 -- exec busybox-7dff88458-dlklq -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-389500 -- exec busybox-7dff88458-szl8s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-389500 -- exec busybox-7dff88458-szl8s -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (3.63s)
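The PingHostFromPods test above extracts the host IP inside each busybox pod with `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`, then pings it. A sketch of that pipeline run against a canned resolver transcript, so it can be exercised outside a cluster. The transcript contents and spacing are hypothetical (real output comes from busybox `nslookup` inside the pod; the `NR==5` / field-3 offsets depend on that exact formatting):

```shell
# Exercise the IP-extraction pipeline from ha_test.go:207 against canned
# nslookup output. Addresses and spacing here are hypothetical stand-ins;
# note the double space on line 5, which makes the IP field 3 for cut.
set -eu
lookup_output='Server:    10.96.0.10
Address:   10.96.0.10:53

Name:      host.minikube.internal
Address:  192.168.65.254'

# awk 'NR==5' keeps only line 5 (the answer); cut -d' ' -f3 takes the IP.
host_ip=$(printf '%s\n' "$lookup_output" | awk 'NR==5' | cut -d' ' -f3)
echo "$host_ip"

# The test then checks reachability with: ping -c 1 "$host_ip"
```

This is why the log shows each pod's extraction followed by `ping -c 1 192.168.65.254`: the second command consumes the IP produced by the first.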

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (56.3s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe node add -p ha-389500 -v=7 --alsologtostderr
E1028 11:27:17.665943   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-928900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:27:17.673942   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-928900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:27:17.686954   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-928900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:27:17.709936   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-928900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:27:17.752940   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-928900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:27:17.835958   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-928900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:27:17.998954   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-928900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:27:18.320370   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-928900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:27:18.963289   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-928900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe node add -p ha-389500 -v=7 --alsologtostderr: (53.486892s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 status -v=7 --alsologtostderr
E1028 11:27:20.245608   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-928900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe -p ha-389500 status -v=7 --alsologtostderr: (2.810479s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (56.30s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-389500 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.20s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (3s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
E1028 11:27:22.807909   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-928900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (2.9958734s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (3.00s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (45.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 status --output json -v=7 --alsologtostderr
E1028 11:27:27.930137   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-928900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:328: (dbg) Done: out/minikube-windows-amd64.exe -p ha-389500 status --output json -v=7 --alsologtostderr: (2.6047091s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 cp testdata\cp-test.txt ha-389500:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 ssh -n ha-389500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 cp ha-389500:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4075123566\001\cp-test_ha-389500.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 ssh -n ha-389500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 cp ha-389500:/home/docker/cp-test.txt ha-389500-m02:/home/docker/cp-test_ha-389500_ha-389500-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-389500 cp ha-389500:/home/docker/cp-test.txt ha-389500-m02:/home/docker/cp-test_ha-389500_ha-389500-m02.txt: (1.0632955s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 ssh -n ha-389500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 ssh -n ha-389500-m02 "sudo cat /home/docker/cp-test_ha-389500_ha-389500-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 cp ha-389500:/home/docker/cp-test.txt ha-389500-m03:/home/docker/cp-test_ha-389500_ha-389500-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-389500 cp ha-389500:/home/docker/cp-test.txt ha-389500-m03:/home/docker/cp-test_ha-389500_ha-389500-m03.txt: (1.1203251s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 ssh -n ha-389500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 ssh -n ha-389500-m03 "sudo cat /home/docker/cp-test_ha-389500_ha-389500-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 cp ha-389500:/home/docker/cp-test.txt ha-389500-m04:/home/docker/cp-test_ha-389500_ha-389500-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-389500 cp ha-389500:/home/docker/cp-test.txt ha-389500-m04:/home/docker/cp-test_ha-389500_ha-389500-m04.txt: (1.085055s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 ssh -n ha-389500 "sudo cat /home/docker/cp-test.txt"
E1028 11:27:38.172692   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-928900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 ssh -n ha-389500-m04 "sudo cat /home/docker/cp-test_ha-389500_ha-389500-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 cp testdata\cp-test.txt ha-389500-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 ssh -n ha-389500-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 cp ha-389500-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4075123566\001\cp-test_ha-389500-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 ssh -n ha-389500-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 cp ha-389500-m02:/home/docker/cp-test.txt ha-389500:/home/docker/cp-test_ha-389500-m02_ha-389500.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-389500 cp ha-389500-m02:/home/docker/cp-test.txt ha-389500:/home/docker/cp-test_ha-389500-m02_ha-389500.txt: (1.1039696s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 ssh -n ha-389500-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 ssh -n ha-389500 "sudo cat /home/docker/cp-test_ha-389500-m02_ha-389500.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 cp ha-389500-m02:/home/docker/cp-test.txt ha-389500-m03:/home/docker/cp-test_ha-389500-m02_ha-389500-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-389500 cp ha-389500-m02:/home/docker/cp-test.txt ha-389500-m03:/home/docker/cp-test_ha-389500-m02_ha-389500-m03.txt: (1.0474621s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 ssh -n ha-389500-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 ssh -n ha-389500-m03 "sudo cat /home/docker/cp-test_ha-389500-m02_ha-389500-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 cp ha-389500-m02:/home/docker/cp-test.txt ha-389500-m04:/home/docker/cp-test_ha-389500-m02_ha-389500-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-389500 cp ha-389500-m02:/home/docker/cp-test.txt ha-389500-m04:/home/docker/cp-test_ha-389500-m02_ha-389500-m04.txt: (1.0462935s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 ssh -n ha-389500-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 ssh -n ha-389500-m04 "sudo cat /home/docker/cp-test_ha-389500-m02_ha-389500-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 cp testdata\cp-test.txt ha-389500-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 ssh -n ha-389500-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 cp ha-389500-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4075123566\001\cp-test_ha-389500-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 ssh -n ha-389500-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 cp ha-389500-m03:/home/docker/cp-test.txt ha-389500:/home/docker/cp-test_ha-389500-m03_ha-389500.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-389500 cp ha-389500-m03:/home/docker/cp-test.txt ha-389500:/home/docker/cp-test_ha-389500-m03_ha-389500.txt: (1.1013574s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 ssh -n ha-389500-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 ssh -n ha-389500 "sudo cat /home/docker/cp-test_ha-389500-m03_ha-389500.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 cp ha-389500-m03:/home/docker/cp-test.txt ha-389500-m02:/home/docker/cp-test_ha-389500-m03_ha-389500-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-389500 cp ha-389500-m03:/home/docker/cp-test.txt ha-389500-m02:/home/docker/cp-test_ha-389500-m03_ha-389500-m02.txt: (1.1134013s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 ssh -n ha-389500-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 ssh -n ha-389500-m02 "sudo cat /home/docker/cp-test_ha-389500-m03_ha-389500-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 cp ha-389500-m03:/home/docker/cp-test.txt ha-389500-m04:/home/docker/cp-test_ha-389500-m03_ha-389500-m04.txt
E1028 11:27:58.655841   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-928900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-389500 cp ha-389500-m03:/home/docker/cp-test.txt ha-389500-m04:/home/docker/cp-test_ha-389500-m03_ha-389500-m04.txt: (1.1060384s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 ssh -n ha-389500-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 ssh -n ha-389500-m04 "sudo cat /home/docker/cp-test_ha-389500-m03_ha-389500-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 cp testdata\cp-test.txt ha-389500-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 ssh -n ha-389500-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 cp ha-389500-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4075123566\001\cp-test_ha-389500-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 ssh -n ha-389500-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 cp ha-389500-m04:/home/docker/cp-test.txt ha-389500:/home/docker/cp-test_ha-389500-m04_ha-389500.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-389500 cp ha-389500-m04:/home/docker/cp-test.txt ha-389500:/home/docker/cp-test_ha-389500-m04_ha-389500.txt: (1.0533888s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 ssh -n ha-389500-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 ssh -n ha-389500 "sudo cat /home/docker/cp-test_ha-389500-m04_ha-389500.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 cp ha-389500-m04:/home/docker/cp-test.txt ha-389500-m02:/home/docker/cp-test_ha-389500-m04_ha-389500-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-389500 cp ha-389500-m04:/home/docker/cp-test.txt ha-389500-m02:/home/docker/cp-test_ha-389500-m04_ha-389500-m02.txt: (1.0846908s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 ssh -n ha-389500-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 ssh -n ha-389500-m02 "sudo cat /home/docker/cp-test_ha-389500-m04_ha-389500-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 cp ha-389500-m04:/home/docker/cp-test.txt ha-389500-m03:/home/docker/cp-test_ha-389500-m04_ha-389500-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-389500 cp ha-389500-m04:/home/docker/cp-test.txt ha-389500-m03:/home/docker/cp-test_ha-389500-m04_ha-389500-m03.txt: (1.1022379s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 ssh -n ha-389500-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 ssh -n ha-389500-m03 "sudo cat /home/docker/cp-test_ha-389500-m04_ha-389500-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (45.65s)
TestMultiControlPlane/serial/StopSecondaryNode (14.05s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-windows-amd64.exe -p ha-389500 node stop m02 -v=7 --alsologtostderr: (11.980614s)
ha_test.go:371: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-389500 status -v=7 --alsologtostderr: exit status 7 (2.0726625s)
-- stdout --
	ha-389500
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-389500-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-389500-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-389500-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1028 11:28:23.292538   11476 out.go:345] Setting OutFile to fd 1040 ...
	I1028 11:28:23.362542   11476 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:28:23.362542   11476 out.go:358] Setting ErrFile to fd 1888...
	I1028 11:28:23.362542   11476 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:28:23.376112   11476 out.go:352] Setting JSON to false
	I1028 11:28:23.376112   11476 mustload.go:65] Loading cluster: ha-389500
	I1028 11:28:23.376112   11476 notify.go:220] Checking for updates...
	I1028 11:28:23.376701   11476 config.go:182] Loaded profile config "ha-389500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 11:28:23.377231   11476 status.go:174] checking status of ha-389500 ...
	I1028 11:28:23.397048   11476 cli_runner.go:164] Run: docker container inspect ha-389500 --format={{.State.Status}}
	I1028 11:28:23.469345   11476 status.go:371] ha-389500 host status = "Running" (err=<nil>)
	I1028 11:28:23.469345   11476 host.go:66] Checking if "ha-389500" exists ...
	I1028 11:28:23.478366   11476 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-389500
	I1028 11:28:23.550475   11476 host.go:66] Checking if "ha-389500" exists ...
	I1028 11:28:23.561479   11476 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1028 11:28:23.569473   11476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-389500
	I1028 11:28:23.640482   11476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59997 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\ha-389500\id_rsa Username:docker}
	I1028 11:28:23.784661   11476 ssh_runner.go:195] Run: systemctl --version
	I1028 11:28:23.810520   11476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:28:23.851686   11476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-389500
	I1028 11:28:23.944714   11476 kubeconfig.go:125] found "ha-389500" server: "https://127.0.0.1:59996"
	I1028 11:28:23.944714   11476 api_server.go:166] Checking apiserver status ...
	I1028 11:28:23.954725   11476 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 11:28:23.993688   11476 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2477/cgroup
	I1028 11:28:24.012657   11476 api_server.go:182] apiserver freezer: "7:freezer:/docker/ff792e3c6d9ec2cb698dff32f1d9ee6ac63adc8d5cd4ba600cfb613420abad5d/kubepods/burstable/podd8adc5c100132d8fad838edc69d16028/61454f6257a0f3dc892d7f0860038d1b9c6014bffe8d4a5ea493f9fb709fb70a"
	I1028 11:28:24.024239   11476 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/ff792e3c6d9ec2cb698dff32f1d9ee6ac63adc8d5cd4ba600cfb613420abad5d/kubepods/burstable/podd8adc5c100132d8fad838edc69d16028/61454f6257a0f3dc892d7f0860038d1b9c6014bffe8d4a5ea493f9fb709fb70a/freezer.state
	I1028 11:28:24.043486   11476 api_server.go:204] freezer state: "THAWED"
	I1028 11:28:24.044861   11476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:59996/healthz ...
	I1028 11:28:24.060230   11476 api_server.go:279] https://127.0.0.1:59996/healthz returned 200:
	ok
	I1028 11:28:24.060230   11476 status.go:463] ha-389500 apiserver status = Running (err=<nil>)
	I1028 11:28:24.060230   11476 status.go:176] ha-389500 status: &{Name:ha-389500 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1028 11:28:24.060230   11476 status.go:174] checking status of ha-389500-m02 ...
	I1028 11:28:24.076893   11476 cli_runner.go:164] Run: docker container inspect ha-389500-m02 --format={{.State.Status}}
	I1028 11:28:24.150994   11476 status.go:371] ha-389500-m02 host status = "Stopped" (err=<nil>)
	I1028 11:28:24.151122   11476 status.go:384] host is not running, skipping remaining checks
	I1028 11:28:24.151122   11476 status.go:176] ha-389500-m02 status: &{Name:ha-389500-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1028 11:28:24.151122   11476 status.go:174] checking status of ha-389500-m03 ...
	I1028 11:28:24.170800   11476 cli_runner.go:164] Run: docker container inspect ha-389500-m03 --format={{.State.Status}}
	I1028 11:28:24.245791   11476 status.go:371] ha-389500-m03 host status = "Running" (err=<nil>)
	I1028 11:28:24.245853   11476 host.go:66] Checking if "ha-389500-m03" exists ...
	I1028 11:28:24.254213   11476 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-389500-m03
	I1028 11:28:24.320532   11476 host.go:66] Checking if "ha-389500-m03" exists ...
	I1028 11:28:24.333922   11476 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1028 11:28:24.342214   11476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-389500-m03
	I1028 11:28:24.417036   11476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60112 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\ha-389500-m03\id_rsa Username:docker}
	I1028 11:28:24.555915   11476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:28:24.589774   11476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-389500
	I1028 11:28:24.671004   11476 kubeconfig.go:125] found "ha-389500" server: "https://127.0.0.1:59996"
	I1028 11:28:24.671037   11476 api_server.go:166] Checking apiserver status ...
	I1028 11:28:24.682922   11476 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 11:28:24.723468   11476 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2298/cgroup
	I1028 11:28:24.750150   11476 api_server.go:182] apiserver freezer: "7:freezer:/docker/5033ab9e88a2d2c7ff3100b877bcb192f24c14208c77a6d68b0891e53eacdee5/kubepods/burstable/pod2a18e8d5e726dffc3e163d1cef894701/a14c7ac9a430dc51f2a3eacb4902a7d359c67c8c30bcea32fd2405bd48e46841"
	I1028 11:28:24.760142   11476 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/5033ab9e88a2d2c7ff3100b877bcb192f24c14208c77a6d68b0891e53eacdee5/kubepods/burstable/pod2a18e8d5e726dffc3e163d1cef894701/a14c7ac9a430dc51f2a3eacb4902a7d359c67c8c30bcea32fd2405bd48e46841/freezer.state
	I1028 11:28:24.779652   11476 api_server.go:204] freezer state: "THAWED"
	I1028 11:28:24.779652   11476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:59996/healthz ...
	I1028 11:28:24.792654   11476 api_server.go:279] https://127.0.0.1:59996/healthz returned 200:
	ok
	I1028 11:28:24.792654   11476 status.go:463] ha-389500-m03 apiserver status = Running (err=<nil>)
	I1028 11:28:24.792654   11476 status.go:176] ha-389500-m03 status: &{Name:ha-389500-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1028 11:28:24.792654   11476 status.go:174] checking status of ha-389500-m04 ...
	I1028 11:28:24.810359   11476 cli_runner.go:164] Run: docker container inspect ha-389500-m04 --format={{.State.Status}}
	I1028 11:28:24.882798   11476 status.go:371] ha-389500-m04 host status = "Running" (err=<nil>)
	I1028 11:28:24.882798   11476 host.go:66] Checking if "ha-389500-m04" exists ...
	I1028 11:28:24.894841   11476 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-389500-m04
	I1028 11:28:24.976661   11476 host.go:66] Checking if "ha-389500-m04" exists ...
	I1028 11:28:24.992619   11476 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1028 11:28:25.000746   11476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-389500-m04
	I1028 11:28:25.068446   11476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60249 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\ha-389500-m04\id_rsa Username:docker}
	I1028 11:28:25.197340   11476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:28:25.219842   11476 status.go:176] ha-389500-m04 status: &{Name:ha-389500-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (14.05s)
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (2.17s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (2.1734554s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (2.17s)
TestMultiControlPlane/serial/RestartSecondaryNode (150.19s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 node start m02 -v=7 --alsologtostderr
E1028 11:28:39.619906   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-928900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:28:50.128598   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-740500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:30:01.543655   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-928900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:30:13.208572   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-740500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-windows-amd64.exe -p ha-389500 node start m02 -v=7 --alsologtostderr: (2m27.4696726s)
ha_test.go:430: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-windows-amd64.exe -p ha-389500 status -v=7 --alsologtostderr: (2.5478668s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (150.19s)
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (2.69s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (2.6945936s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (2.69s)
TestMultiControlPlane/serial/RestartClusterKeepsNodes (253.32s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-windows-amd64.exe node list -p ha-389500 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-windows-amd64.exe stop -p ha-389500 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Done: out/minikube-windows-amd64.exe stop -p ha-389500 -v=7 --alsologtostderr: (37.7848024s)
ha_test.go:469: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-389500 --wait=true -v=7 --alsologtostderr
E1028 11:32:17.672281   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-928900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:32:45.389172   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-928900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:33:50.135707   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-740500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-389500 --wait=true -v=7 --alsologtostderr: (3m35.0675725s)
ha_test.go:474: (dbg) Run:  out/minikube-windows-amd64.exe node list -p ha-389500
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (253.32s)
TestMultiControlPlane/serial/DeleteSecondaryNode (16.66s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-windows-amd64.exe -p ha-389500 node delete m03 -v=7 --alsologtostderr: (14.1945702s)
ha_test.go:495: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 status -v=7 --alsologtostderr
ha_test.go:495: (dbg) Done: out/minikube-windows-amd64.exe -p ha-389500 status -v=7 --alsologtostderr: (1.9734776s)
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.66s)
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (2.04s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (2.0405892s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (2.04s)
TestMultiControlPlane/serial/StopCluster (36.38s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Done: out/minikube-windows-amd64.exe -p ha-389500 stop -v=7 --alsologtostderr: (35.9141676s)
ha_test.go:539: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-389500 status -v=7 --alsologtostderr: exit status 7 (460.4591ms)
-- stdout --
	ha-389500
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-389500-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-389500-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1028 11:36:08.363410   12248 out.go:345] Setting OutFile to fd 1596 ...
	I1028 11:36:08.429413   12248 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:36:08.429413   12248 out.go:358] Setting ErrFile to fd 1868...
	I1028 11:36:08.429413   12248 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:36:08.442413   12248 out.go:352] Setting JSON to false
	I1028 11:36:08.442413   12248 mustload.go:65] Loading cluster: ha-389500
	I1028 11:36:08.442413   12248 notify.go:220] Checking for updates...
	I1028 11:36:08.442413   12248 config.go:182] Loaded profile config "ha-389500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 11:36:08.442413   12248 status.go:174] checking status of ha-389500 ...
	I1028 11:36:08.462403   12248 cli_runner.go:164] Run: docker container inspect ha-389500 --format={{.State.Status}}
	I1028 11:36:08.528447   12248 status.go:371] ha-389500 host status = "Stopped" (err=<nil>)
	I1028 11:36:08.529446   12248 status.go:384] host is not running, skipping remaining checks
	I1028 11:36:08.529446   12248 status.go:176] ha-389500 status: &{Name:ha-389500 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1028 11:36:08.529446   12248 status.go:174] checking status of ha-389500-m02 ...
	I1028 11:36:08.545411   12248 cli_runner.go:164] Run: docker container inspect ha-389500-m02 --format={{.State.Status}}
	I1028 11:36:08.609416   12248 status.go:371] ha-389500-m02 host status = "Stopped" (err=<nil>)
	I1028 11:36:08.609416   12248 status.go:384] host is not running, skipping remaining checks
	I1028 11:36:08.609416   12248 status.go:176] ha-389500-m02 status: &{Name:ha-389500-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1028 11:36:08.609416   12248 status.go:174] checking status of ha-389500-m04 ...
	I1028 11:36:08.624410   12248 cli_runner.go:164] Run: docker container inspect ha-389500-m04 --format={{.State.Status}}
	I1028 11:36:08.686464   12248 status.go:371] ha-389500-m04 host status = "Stopped" (err=<nil>)
	I1028 11:36:08.686464   12248 status.go:384] host is not running, skipping remaining checks
	I1028 11:36:08.686464   12248 status.go:176] ha-389500-m04 status: &{Name:ha-389500-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.38s)
TestMultiControlPlane/serial/RestartCluster (157.30s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-389500 --wait=true -v=7 --alsologtostderr --driver=docker
E1028 11:37:17.679150   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-928900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-389500 --wait=true -v=7 --alsologtostderr --driver=docker: (2m34.8787859s)
ha_test.go:568: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 status -v=7 --alsologtostderr
ha_test.go:568: (dbg) Done: out/minikube-windows-amd64.exe -p ha-389500 status -v=7 --alsologtostderr: (2.0213538s)
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (157.30s)
TestMultiControlPlane/serial/DegradedAfterClusterRestart (2.04s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (2.0395966s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (2.04s)
TestMultiControlPlane/serial/AddSecondaryNode (76.88s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-windows-amd64.exe node add -p ha-389500 --control-plane -v=7 --alsologtostderr
E1028 11:38:50.143177   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-740500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-windows-amd64.exe node add -p ha-389500 --control-plane -v=7 --alsologtostderr: (1m14.0998754s)
ha_test.go:613: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-389500 status -v=7 --alsologtostderr
ha_test.go:613: (dbg) Done: out/minikube-windows-amd64.exe -p ha-389500 status -v=7 --alsologtostderr: (2.7823853s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (76.88s)
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (2.81s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (2.8085378s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (2.81s)
TestImageBuild/serial/Setup (60.38s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-802100 --driver=docker
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-802100 --driver=docker: (1m0.3828159s)
--- PASS: TestImageBuild/serial/Setup (60.38s)
TestImageBuild/serial/NormalBuild (5.27s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-802100
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-802100: (5.274622s)
--- PASS: TestImageBuild/serial/NormalBuild (5.27s)

TestImageBuild/serial/BuildWithBuildArg (2.32s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-802100
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-802100: (2.3169585s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (2.32s)

TestImageBuild/serial/BuildWithDockerIgnore (1.54s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-802100
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-802100: (1.5377757s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (1.54s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.66s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-802100
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-802100: (1.6573851s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.66s)

TestJSONOutput/start/Command (97.78s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-470300 --output=json --user=testUser --memory=2200 --wait=true --driver=docker
E1028 11:42:17.687810   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-928900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-470300 --output=json --user=testUser --memory=2200 --wait=true --driver=docker: (1m37.7766383s)
--- PASS: TestJSONOutput/start/Command (97.78s)

TestJSONOutput/start/Audit (0.05s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.05s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (1.37s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-470300 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-470300 --output=json --user=testUser: (1.3741296s)
--- PASS: TestJSONOutput/pause/Command (1.37s)

TestJSONOutput/pause/Audit (0.06s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.06s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (1.22s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-470300 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-470300 --output=json --user=testUser: (1.2229387s)
--- PASS: TestJSONOutput/unpause/Command (1.22s)

TestJSONOutput/unpause/Audit (0.06s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.06s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.17s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-470300 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-470300 --output=json --user=testUser: (7.1740467s)
--- PASS: TestJSONOutput/stop/Command (7.17s)

TestJSONOutput/stop/Audit (0.05s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.05s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.95s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-215600 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-215600 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (253.6691ms)

-- stdout --
	{"specversion":"1.0","id":"02d8e521-18d8-4bf5-bbde-148ec2286378","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-215600] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5073 Build 19045.5073","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c179d6b5-cc1d-4cb3-9e69-9827abddba75","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube4\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"a08501fd-57a7-480c-b7a7-a07ec449ef9f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"bd07b6b4-e5c1-4dea-9b9a-43fac4ae3c4e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"d9a00733-070f-4c75-9202-0ec5ecfa2f0a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19875"}}
	{"specversion":"1.0","id":"e7e0dd34-69e4-4f14-9f20-24edd71094c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0524b9f5-e51f-45a5-8191-b6b1f6cf8a26","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-215600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-215600
--- PASS: TestErrorJSONOutput (0.95s)

TestKicCustomNetwork/create_custom_network (68.46s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-247700 --network=
E1028 11:43:40.767937   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-928900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 11:43:50.150539   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-740500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-247700 --network=: (1m4.6576366s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-247700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-247700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-247700: (3.7184094s)
--- PASS: TestKicCustomNetwork/create_custom_network (68.46s)

TestKicCustomNetwork/use_default_bridge_network (67.71s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-839100 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-839100 --network=bridge: (1m4.1871194s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-839100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-839100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-839100: (3.4340938s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (67.71s)

TestKicExistingNetwork (69.4s)

=== RUN   TestKicExistingNetwork
I1028 11:45:47.752431   11176 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1028 11:45:47.828899   11176 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1028 11:45:47.839032   11176 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1028 11:45:47.839032   11176 cli_runner.go:164] Run: docker network inspect existing-network
W1028 11:45:47.913668   11176 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1028 11:45:47.913668   11176 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I1028 11:45:47.913668   11176 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I1028 11:45:47.924130   11176 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1028 11:45:48.017317   11176 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000d0b560}
I1028 11:45:48.017866   11176 network_create.go:124] attempt to create docker network existing-network 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I1028 11:45:48.027852   11176 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
W1028 11:45:48.100270   11176 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network returned with exit code 1
W1028 11:45:48.100270   11176 network_create.go:149] failed to create docker network existing-network 192.168.49.0/24 with gateway 192.168.49.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network: exit status 1
stdout:
stderr:
Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
W1028 11:45:48.100270   11176 network_create.go:116] failed to create docker network existing-network 192.168.49.0/24, will retry: subnet is taken
I1028 11:45:48.131436   11176 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
I1028 11:45:48.149543   11176 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000cdee70}
I1028 11:45:48.149543   11176 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1028 11:45:48.158833   11176 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1028 11:45:48.346893   11176 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-windows-amd64.exe start -p existing-network-973400 --network=existing-network
E1028 11:46:53.236385   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-740500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-windows-amd64.exe start -p existing-network-973400 --network=existing-network: (1m5.1948482s)
helpers_test.go:175: Cleaning up "existing-network-973400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p existing-network-973400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p existing-network-973400: (3.4356447s)
I1028 11:46:57.070279   11176 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (69.40s)

TestKicCustomSubnet (69.06s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-subnet-789200 --subnet=192.168.60.0/24
E1028 11:47:17.697265   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-928900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-subnet-789200 --subnet=192.168.60.0/24: (1m5.1194804s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-789200 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-789200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p custom-subnet-789200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p custom-subnet-789200: (3.8547179s)
--- PASS: TestKicCustomSubnet (69.06s)

TestKicStaticIP (70.09s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe start -p static-ip-563800 --static-ip=192.168.200.200
E1028 11:48:50.159301   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-740500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe start -p static-ip-563800 --static-ip=192.168.200.200: (1m5.7810134s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-windows-amd64.exe -p static-ip-563800 ip
helpers_test.go:175: Cleaning up "static-ip-563800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p static-ip-563800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p static-ip-563800: (3.8697878s)
--- PASS: TestKicStaticIP (70.09s)

TestMainNoArgs (0.25s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.25s)

TestMinikubeProfile (136.91s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-382600 --driver=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-382600 --driver=docker: (1m2.4383356s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-382600 --driver=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-382600 --driver=docker: (1m1.0023802s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-382600
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (1.6949441s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-382600
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (2.0572059s)
helpers_test.go:175: Cleaning up "second-382600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-382600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-382600: (4.0111196s)
helpers_test.go:175: Cleaning up "first-382600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-382600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-382600: (5.0447961s)
--- PASS: TestMinikubeProfile (136.91s)

TestMountStart/serial/StartWithMountFirst (17.55s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-797700 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-797700 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker: (16.551018s)
--- PASS: TestMountStart/serial/StartWithMountFirst (17.55s)

TestMountStart/serial/VerifyMountFirst (0.74s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-797700 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.74s)

TestMountStart/serial/StartWithMountSecond (16.63s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-797700 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-797700 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker: (15.6300998s)
--- PASS: TestMountStart/serial/StartWithMountSecond (16.63s)

TestMountStart/serial/VerifyMountSecond (0.71s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-797700 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.71s)

TestMountStart/serial/DeleteFirst (2.77s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-797700 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-797700 --alsologtostderr -v=5: (2.7694121s)
--- PASS: TestMountStart/serial/DeleteFirst (2.77s)

TestMountStart/serial/VerifyMountPostDelete (0.72s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-797700 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.72s)

TestMountStart/serial/Stop (1.99s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-797700
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-797700: (1.9851596s)
--- PASS: TestMountStart/serial/Stop (1.99s)

TestMountStart/serial/RestartStopped (11.83s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-797700
E1028 11:52:17.704706   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-928900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-797700: (10.8290327s)
--- PASS: TestMountStart/serial/RestartStopped (11.83s)

TestMountStart/serial/VerifyMountPostStop (0.71s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-797700 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.71s)

TestMultiNode/serial/FreshStart2Nodes (147.14s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-461300 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker
E1028 11:53:50.167693   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-740500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-461300 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker: (2m25.5868841s)
multinode_test.go:102: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-461300 status --alsologtostderr
multinode_test.go:102: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-461300 status --alsologtostderr: (1.5539654s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (147.14s)

TestMultiNode/serial/DeployApp2Nodes (40.75s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-461300 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-461300 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-461300 -- rollout status deployment/busybox: (33.8886345s)
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-461300 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-461300 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-461300 -- exec busybox-7dff88458-smglj -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-461300 -- exec busybox-7dff88458-smglj -- nslookup kubernetes.io: (1.7438447s)
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-461300 -- exec busybox-7dff88458-swclq -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-461300 -- exec busybox-7dff88458-swclq -- nslookup kubernetes.io: (1.5302603s)
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-461300 -- exec busybox-7dff88458-smglj -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-461300 -- exec busybox-7dff88458-swclq -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-461300 -- exec busybox-7dff88458-smglj -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-461300 -- exec busybox-7dff88458-swclq -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (40.75s)
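The jsonpath queries above pull the busybox pod IPs and names so the test can confirm the deployment spread across both nodes. A minimal sketch of that uniqueness check, using canned output in place of a live `kubectl get pods -o jsonpath='{.items[*].status.podIP}'` call (the IP values below are made up for illustration):

```shell
# Space-separated pod IPs, shaped like the jsonpath output above.
# These addresses are hypothetical, not taken from the log.
ips="10.244.0.3 10.244.1.2"

# With one busybox pod scheduled per node, every IP should be distinct.
unique=$(printf '%s\n' $ips | sort -u | wc -l | tr -d ' ')
echo "unique IPs: $unique"
```

After this check, the log shows each pod resolving kubernetes.io and kubernetes.default, which exercises DNS from each node separately.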

TestMultiNode/serial/PingHostFrom2Pods (2.49s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-461300 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-461300 -- exec busybox-7dff88458-smglj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-461300 -- exec busybox-7dff88458-smglj -- sh -c "ping -c 1 192.168.65.254"
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-461300 -- exec busybox-7dff88458-swclq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-461300 -- exec busybox-7dff88458-swclq -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (2.49s)
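The `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` pipeline above extracts the host IP from the pod's nslookup output: line 5 is the answer's `Address:` line, and field 3 lands on the address because `cut` counts the empty field between doubled spaces. A sketch against canned output (the exact BusyBox nslookup layout here is an assumption):

```shell
# Canned stand-in for `nslookup host.minikube.internal` run inside a pod.
lookup='Server:    10.96.0.10
Address:   10.96.0.10:53

Name:      host.minikube.internal
Address:  192.168.65.254'

# NR==5 selects the answer line; with two spaces after "Address:",
# cut -d" " -f3 yields the IP (field 2 is the empty string between them).
ip=$(printf '%s\n' "$lookup" | awk 'NR==5' | cut -d' ' -f3)
echo "$ip"
```

The extracted address (192.168.65.254, matching the ping target in the log) is then pinged once from each pod to prove host reachability.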

TestMultiNode/serial/AddNode (48.69s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-461300 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-461300 -v 3 --alsologtostderr: (46.30848s)
multinode_test.go:127: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-461300 status --alsologtostderr
multinode_test.go:127: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-461300 status --alsologtostderr: (2.3758241s)
--- PASS: TestMultiNode/serial/AddNode (48.69s)

TestMultiNode/serial/MultiNodeLabels (0.24s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-461300 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.24s)

TestMultiNode/serial/ProfileList (2.04s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (2.037109s)
--- PASS: TestMultiNode/serial/ProfileList (2.04s)

TestMultiNode/serial/CopyFile (26.8s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-461300 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-461300 status --output json --alsologtostderr: (1.9665611s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-461300 cp testdata\cp-test.txt multinode-461300:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-461300 ssh -n multinode-461300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-461300 cp multinode-461300:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile1009179210\001\cp-test_multinode-461300.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-461300 ssh -n multinode-461300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-461300 cp multinode-461300:/home/docker/cp-test.txt multinode-461300-m02:/home/docker/cp-test_multinode-461300_multinode-461300-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-461300 cp multinode-461300:/home/docker/cp-test.txt multinode-461300-m02:/home/docker/cp-test_multinode-461300_multinode-461300-m02.txt: (1.1125913s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-461300 ssh -n multinode-461300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-461300 ssh -n multinode-461300-m02 "sudo cat /home/docker/cp-test_multinode-461300_multinode-461300-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-461300 cp multinode-461300:/home/docker/cp-test.txt multinode-461300-m03:/home/docker/cp-test_multinode-461300_multinode-461300-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-461300 cp multinode-461300:/home/docker/cp-test.txt multinode-461300-m03:/home/docker/cp-test_multinode-461300_multinode-461300-m03.txt: (1.087909s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-461300 ssh -n multinode-461300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-461300 ssh -n multinode-461300-m03 "sudo cat /home/docker/cp-test_multinode-461300_multinode-461300-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-461300 cp testdata\cp-test.txt multinode-461300-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-461300 ssh -n multinode-461300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-461300 cp multinode-461300-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile1009179210\001\cp-test_multinode-461300-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-461300 ssh -n multinode-461300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-461300 cp multinode-461300-m02:/home/docker/cp-test.txt multinode-461300:/home/docker/cp-test_multinode-461300-m02_multinode-461300.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-461300 cp multinode-461300-m02:/home/docker/cp-test.txt multinode-461300:/home/docker/cp-test_multinode-461300-m02_multinode-461300.txt: (1.079977s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-461300 ssh -n multinode-461300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-461300 ssh -n multinode-461300 "sudo cat /home/docker/cp-test_multinode-461300-m02_multinode-461300.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-461300 cp multinode-461300-m02:/home/docker/cp-test.txt multinode-461300-m03:/home/docker/cp-test_multinode-461300-m02_multinode-461300-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-461300 cp multinode-461300-m02:/home/docker/cp-test.txt multinode-461300-m03:/home/docker/cp-test_multinode-461300-m02_multinode-461300-m03.txt: (1.0990254s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-461300 ssh -n multinode-461300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-461300 ssh -n multinode-461300-m03 "sudo cat /home/docker/cp-test_multinode-461300-m02_multinode-461300-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-461300 cp testdata\cp-test.txt multinode-461300-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-461300 ssh -n multinode-461300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-461300 cp multinode-461300-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile1009179210\001\cp-test_multinode-461300-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-461300 ssh -n multinode-461300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-461300 cp multinode-461300-m03:/home/docker/cp-test.txt multinode-461300:/home/docker/cp-test_multinode-461300-m03_multinode-461300.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-461300 cp multinode-461300-m03:/home/docker/cp-test.txt multinode-461300:/home/docker/cp-test_multinode-461300-m03_multinode-461300.txt: (1.0873177s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-461300 ssh -n multinode-461300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-461300 ssh -n multinode-461300 "sudo cat /home/docker/cp-test_multinode-461300-m03_multinode-461300.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-461300 cp multinode-461300-m03:/home/docker/cp-test.txt multinode-461300-m02:/home/docker/cp-test_multinode-461300-m03_multinode-461300-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-461300 cp multinode-461300-m03:/home/docker/cp-test.txt multinode-461300-m02:/home/docker/cp-test_multinode-461300-m03_multinode-461300-m02.txt: (1.086645s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-461300 ssh -n multinode-461300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-461300 ssh -n multinode-461300-m02 "sudo cat /home/docker/cp-test_multinode-461300-m03_multinode-461300-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (26.80s)
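Every `cp` step above is immediately verified by an `ssh -n <node> "sudo cat …"` readback on both ends. The pattern can be sketched with plain `cp`/`diff` standing in for `minikube cp` and the paired cat-and-compare (the file content and temp paths below are illustrative, not the test's real testdata):

```shell
# Stand-ins: cp ~ `minikube -p <profile> cp`, diff ~ the paired
# `ssh -n ... "sudo cat"` comparisons driven by helpers_test.go.
src=$(mktemp) && dst=$(mktemp)
echo 'hello from cp-test' > "$src"   # illustrative content
cp "$src" "$dst"                     # host -> node (or node -> node) transfer
diff -q "$src" "$dst" && echo 'round-trip ok'
```

The log runs this round trip in every direction: host to each node, and each node to every other node.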

TestMultiNode/serial/StopNode (4.71s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-461300 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-461300 node stop m03: (1.8241591s)
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-461300 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-461300 status: exit status 7 (1.4171159s)

-- stdout --
	multinode-461300
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-461300-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-461300-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-461300 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-461300 status --alsologtostderr: exit status 7 (1.4696635s)

-- stdout --
	multinode-461300
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-461300-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-461300-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1028 11:57:02.195417    2396 out.go:345] Setting OutFile to fd 1696 ...
	I1028 11:57:02.275426    2396 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:57:02.275426    2396 out.go:358] Setting ErrFile to fd 1752...
	I1028 11:57:02.275426    2396 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:57:02.288506    2396 out.go:352] Setting JSON to false
	I1028 11:57:02.288506    2396 mustload.go:65] Loading cluster: multinode-461300
	I1028 11:57:02.288506    2396 notify.go:220] Checking for updates...
	I1028 11:57:02.290285    2396 config.go:182] Loaded profile config "multinode-461300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 11:57:02.290392    2396 status.go:174] checking status of multinode-461300 ...
	I1028 11:57:02.309844    2396 cli_runner.go:164] Run: docker container inspect multinode-461300 --format={{.State.Status}}
	I1028 11:57:02.388975    2396 status.go:371] multinode-461300 host status = "Running" (err=<nil>)
	I1028 11:57:02.389616    2396 host.go:66] Checking if "multinode-461300" exists ...
	I1028 11:57:02.401982    2396 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-461300
	I1028 11:57:02.472166    2396 host.go:66] Checking if "multinode-461300" exists ...
	I1028 11:57:02.484390    2396 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1028 11:57:02.492439    2396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-461300
	I1028 11:57:02.579982    2396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61527 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\multinode-461300\id_rsa Username:docker}
	I1028 11:57:02.712176    2396 ssh_runner.go:195] Run: systemctl --version
	I1028 11:57:02.739076    2396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:57:02.776303    2396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-461300
	I1028 11:57:02.855109    2396 kubeconfig.go:125] found "multinode-461300" server: "https://127.0.0.1:61526"
	I1028 11:57:02.855184    2396 api_server.go:166] Checking apiserver status ...
	I1028 11:57:02.866563    2396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 11:57:02.906809    2396 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2510/cgroup
	I1028 11:57:02.926696    2396 api_server.go:182] apiserver freezer: "7:freezer:/docker/50cd89e823286460de07eff8af21b1ad87819670693d558a58f98fa6daefa1eb/kubepods/burstable/podd3a190f9d75b01042f11c1adb871047b/e432f90d4c433b089cd944ce3632bc6f6cd4c9c3a6ebfa2333d96b186513ff31"
	I1028 11:57:02.940043    2396 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/50cd89e823286460de07eff8af21b1ad87819670693d558a58f98fa6daefa1eb/kubepods/burstable/podd3a190f9d75b01042f11c1adb871047b/e432f90d4c433b089cd944ce3632bc6f6cd4c9c3a6ebfa2333d96b186513ff31/freezer.state
	I1028 11:57:02.960295    2396 api_server.go:204] freezer state: "THAWED"
	I1028 11:57:02.960327    2396 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:61526/healthz ...
	I1028 11:57:02.976340    2396 api_server.go:279] https://127.0.0.1:61526/healthz returned 200:
	ok
	I1028 11:57:02.976340    2396 status.go:463] multinode-461300 apiserver status = Running (err=<nil>)
	I1028 11:57:02.976394    2396 status.go:176] multinode-461300 status: &{Name:multinode-461300 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1028 11:57:02.976428    2396 status.go:174] checking status of multinode-461300-m02 ...
	I1028 11:57:02.995932    2396 cli_runner.go:164] Run: docker container inspect multinode-461300-m02 --format={{.State.Status}}
	I1028 11:57:03.074512    2396 status.go:371] multinode-461300-m02 host status = "Running" (err=<nil>)
	I1028 11:57:03.074512    2396 host.go:66] Checking if "multinode-461300-m02" exists ...
	I1028 11:57:03.085338    2396 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-461300-m02
	I1028 11:57:03.152407    2396 host.go:66] Checking if "multinode-461300-m02" exists ...
	I1028 11:57:03.169451    2396 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1028 11:57:03.179439    2396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-461300-m02
	I1028 11:57:03.268516    2396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61576 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\multinode-461300-m02\id_rsa Username:docker}
	I1028 11:57:03.394322    2396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:57:03.419286    2396 status.go:176] multinode-461300-m02 status: &{Name:multinode-461300-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1028 11:57:03.419440    2396 status.go:174] checking status of multinode-461300-m03 ...
	I1028 11:57:03.445217    2396 cli_runner.go:164] Run: docker container inspect multinode-461300-m03 --format={{.State.Status}}
	I1028 11:57:03.518567    2396 status.go:371] multinode-461300-m03 host status = "Stopped" (err=<nil>)
	I1028 11:57:03.518567    2396 status.go:384] host is not running, skipping remaining checks
	I1028 11:57:03.518567    2396 status.go:176] multinode-461300-m03 status: &{Name:multinode-461300-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (4.71s)
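Note that both `status` calls above exit with code 7 rather than 0 while m03 is stopped: minikube encodes component health into the exit status, and the test asserts on the non-zero code, not just the text. A stub sketch of that contract (`minikube_status` below is a stand-in function, not the real CLI):

```shell
# Stub emulating `minikube -p multinode-461300 status` with one node down:
# prints per-node state and returns the non-zero code seen in the log (7).
minikube_status() {
  echo 'multinode-461300-m03: host Stopped, kubelet Stopped'
  return 7
}

if minikube_status; then
  state='all running'
else
  state="degraded (exit $?)"
fi
echo "$state"
```

Scripts that wrap `minikube status` should therefore branch on the exit code rather than assume failure means the command itself broke.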

TestMultiNode/serial/StartAfterStop (18.39s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-461300 node start m03 -v=7 --alsologtostderr
E1028 11:57:17.713252   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-928900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-461300 node start m03 -v=7 --alsologtostderr: (16.4307283s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-461300 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-461300 status -v=7 --alsologtostderr: (1.7775782s)
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (18.39s)

TestMultiNode/serial/RestartKeepsNodes (114.48s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-461300
multinode_test.go:321: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-461300
multinode_test.go:321: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-461300: (25.0088988s)
multinode_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-461300 --wait=true -v=8 --alsologtostderr
E1028 11:58:50.178148   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-740500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-461300 --wait=true -v=8 --alsologtostderr: (1m28.9964783s)
multinode_test.go:331: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-461300
--- PASS: TestMultiNode/serial/RestartKeepsNodes (114.48s)

TestMultiNode/serial/DeleteNode (9.91s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-461300 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-461300 node delete m03: (7.9820104s)
multinode_test.go:422: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-461300 status --alsologtostderr
multinode_test.go:422: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-461300 status --alsologtostderr: (1.5162023s)
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (9.91s)

TestMultiNode/serial/StopMultiNode (24.27s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-461300 stop
multinode_test.go:345: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-461300 stop: (23.4713515s)
multinode_test.go:351: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-461300 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-461300 status: exit status 7 (415.2856ms)

-- stdout --
	multinode-461300
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-461300-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-461300 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-461300 status --alsologtostderr: exit status 7 (381.3613ms)

-- stdout --
	multinode-461300
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-461300-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1028 11:59:50.331220   11232 out.go:345] Setting OutFile to fd 1984 ...
	I1028 11:59:50.397328   11232 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:59:50.397328   11232 out.go:358] Setting ErrFile to fd 1452...
	I1028 11:59:50.397328   11232 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:59:50.410794   11232 out.go:352] Setting JSON to false
	I1028 11:59:50.410794   11232 mustload.go:65] Loading cluster: multinode-461300
	I1028 11:59:50.410794   11232 notify.go:220] Checking for updates...
	I1028 11:59:50.411582   11232 config.go:182] Loaded profile config "multinode-461300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1028 11:59:50.411582   11232 status.go:174] checking status of multinode-461300 ...
	I1028 11:59:50.431967   11232 cli_runner.go:164] Run: docker container inspect multinode-461300 --format={{.State.Status}}
	I1028 11:59:50.501887   11232 status.go:371] multinode-461300 host status = "Stopped" (err=<nil>)
	I1028 11:59:50.501937   11232 status.go:384] host is not running, skipping remaining checks
	I1028 11:59:50.501937   11232 status.go:176] multinode-461300 status: &{Name:multinode-461300 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1028 11:59:50.502033   11232 status.go:174] checking status of multinode-461300-m02 ...
	I1028 11:59:50.517810   11232 cli_runner.go:164] Run: docker container inspect multinode-461300-m02 --format={{.State.Status}}
	I1028 11:59:50.584370   11232 status.go:371] multinode-461300-m02 host status = "Stopped" (err=<nil>)
	I1028 11:59:50.584427   11232 status.go:384] host is not running, skipping remaining checks
	I1028 11:59:50.584427   11232 status.go:176] multinode-461300-m02 status: &{Name:multinode-461300-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.27s)

TestMultiNode/serial/RestartMultiNode (67.74s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-461300 --wait=true -v=8 --alsologtostderr --driver=docker
E1028 12:00:20.801035   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-928900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-461300 --wait=true -v=8 --alsologtostderr --driver=docker: (1m6.0105983s)
multinode_test.go:382: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-461300 status --alsologtostderr
multinode_test.go:382: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-461300 status --alsologtostderr: (1.3345907s)
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (67.74s)
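The go-template passed to kubectl at multinode_test.go:404 walks each node's conditions and prints the status of the Ready condition. A stand-alone sketch of that template logic, using Go's text/template (which kubectl's go-template output is built on) and mock node data rather than real cluster output:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Same template string the test passes to kubectl via -o go-template.
const readyTmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

// renderReady executes the template against a decoded node list and
// returns one " <status>" line per Ready condition found.
func renderReady(nodeList map[string]any) string {
	var out bytes.Buffer
	t := template.Must(template.New("ready").Parse(readyTmpl))
	if err := t.Execute(&out, nodeList); err != nil {
		panic(err)
	}
	return out.String()
}

func main() {
	// Mock two-node list (illustrative data, not real cluster output).
	nodes := map[string]any{
		"items": []map[string]any{
			{"status": map[string]any{"conditions": []map[string]any{
				{"type": "Ready", "status": "True"},
			}}},
			{"status": map[string]any{"conditions": []map[string]any{
				{"type": "MemoryPressure", "status": "False"},
				{"type": "Ready", "status": "True"},
			}}},
		},
	}
	fmt.Print(renderReady(nodes)) // one " True" line per Ready node
}
```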

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (66.32s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-461300
multinode_test.go:464: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-461300-m02 --driver=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-461300-m02 --driver=docker: exit status 14 (263.2771ms)

                                                
                                                
-- stdout --
	* [multinode-461300-m02] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5073 Build 19045.5073
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19875
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-461300-m02' is duplicated with machine name 'multinode-461300-m02' in profile 'multinode-461300'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-461300-m03 --driver=docker
multinode_test.go:472: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-461300-m03 --driver=docker: (1m0.7659512s)
multinode_test.go:479: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-461300
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p multinode-461300: exit status 80 (999.2894ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-461300 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-461300-m03 already exists in multinode-461300-m03 profile
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube_node_6ccce2fc44e3bb58d6c4f91e09ae7c7eaaf65535_33.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-windows-amd64.exe delete -p multinode-461300-m03
multinode_test.go:484: (dbg) Done: out/minikube-windows-amd64.exe delete -p multinode-461300-m03: (4.0582455s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (66.32s)
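The MK_USAGE failure above fires because the requested profile name collides with a machine name already owned by the multinode-461300 profile. A hypothetical sketch of that validation rule (validateProfileName is illustrative, not minikube's actual code):

```go
package main

import "fmt"

// validateProfileName mirrors the uniqueness check behind the MK_USAGE
// error: a new profile name must not collide with any machine name that
// an existing profile already owns. (Illustrative helper only.)
func validateProfileName(name string, machineNames []string) error {
	for _, m := range machineNames {
		if m == name {
			return fmt.Errorf("profile name %q is duplicated with machine name %q", name, m)
		}
	}
	return nil
}

func main() {
	// The two-node profile above owns both of these machine names.
	existing := []string{"multinode-461300", "multinode-461300-m02"}
	fmt.Println(validateProfileName("multinode-461300-m02", existing)) // conflict, as in the log
	fmt.Println(validateProfileName("multinode-461300-m03", existing)) // no conflict yet
}
```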

                                                
                                    
TestPreload (158s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-612300 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.24.4
E1028 12:02:17.723586   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-928900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:03:33.271066   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-740500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:03:50.187475   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-740500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-612300 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.24.4: (1m43.4537433s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-612300 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-612300 image pull gcr.io/k8s-minikube/busybox: (2.0418362s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-612300
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-612300: (12.0404562s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-612300 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker
preload_test.go:66: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-612300 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker: (35.6135592s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-612300 image list
helpers_test.go:175: Cleaning up "test-preload-612300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-612300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-612300: (4.2439796s)
--- PASS: TestPreload (158.00s)

                                                
                                    
TestScheduledStopWindows (130.32s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-122000 --memory=2048 --driver=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-122000 --memory=2048 --driver=docker: (1m1.7965459s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-122000 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-122000 --schedule 5m: (1.3394123s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-122000 -n scheduled-stop-122000
scheduled_stop_test.go:191: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-122000 -n scheduled-stop-122000: (1.002234s)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-122000 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-122000 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-122000 --schedule 5s: (1.7103123s)
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-122000
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-122000: exit status 7 (328.5555ms)

                                                
                                                
-- stdout --
	scheduled-stop-122000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-122000 -n scheduled-stop-122000
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-122000 -n scheduled-stop-122000: exit status 7 (301.3664ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-122000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-122000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-122000: (2.953213s)
--- PASS: TestScheduledStopWindows (130.32s)
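The --format flag used above ({{.TimeToStop}}, {{.Host}}) is a Go template executed against minikube's status struct. A minimal sketch of that mechanism, assuming a Status type with only the fields visible in this report:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Status models only the fields this report's status output shows
// (Name, Host, Kubelet, APIServer, Kubeconfig, TimeToStop).
type Status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	TimeToStop string
}

// formatStatus renders a status the way `minikube status --format=...`
// does: the flag value is parsed as a Go template and executed
// against the status struct.
func formatStatus(format string, st Status) string {
	var out bytes.Buffer
	t := template.Must(template.New("status").Parse(format))
	if err := t.Execute(&out, st); err != nil {
		panic(err)
	}
	return out.String()
}

func main() {
	st := Status{Name: "scheduled-stop-122000", Host: "Running", TimeToStop: "5m0s"}
	fmt.Println(formatStatus("{{.TimeToStop}}", st)) // 5m0s
	fmt.Println(formatStatus("{{.Host}}", st))       // Running
}
```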

                                                
                                    
TestInsufficientStorage (42.07s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe start -p insufficient-storage-021100 --memory=2048 --output=json --wait=true --driver=docker
E1028 12:07:17.733406   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-928900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
status_test.go:50: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p insufficient-storage-021100 --memory=2048 --output=json --wait=true --driver=docker: exit status 26 (37.4684084s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"f4c96613-fb5c-4d63-8fd0-caf9c787645a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-021100] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5073 Build 19045.5073","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9bb42d65-2588-4ef6-adcb-45be28985b43","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube4\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"85ead6fe-0fe0-473b-a02b-675332daae89","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4247728a-e969-4eda-9532-f2e783eee95c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"17a49c64-b1ff-4c97-8c3d-c3575df234d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19875"}}
	{"specversion":"1.0","id":"fb42adb9-cea9-4bb7-8675-3ebae517d6d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8f67006a-857d-464e-8668-72e368cda65c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"526325ab-49a9-4620-bb6d-5aa6e72d286e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"f0ce77f2-3dba-4bad-82ef-0340aa3c8cb4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"e4a25db8-77db-417a-9d2c-235b56c59921","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"c5c606a0-43aa-4cb7-a01a-3042b12487ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-021100\" primary control-plane node in \"insufficient-storage-021100\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"9864ccf3-11ad-4a40-bccb-c6e63cffe00a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1729876044-19868 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"b86c0e32-8377-469c-820b-096f77bd2cda","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"bcc3eeb0-a818-4838-b42a-5c163ac9207c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
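Each line of the --output=json stream above is a separate CloudEvents-style JSON object. A minimal Go sketch that decodes one of them, modeling only the fields visible in the log (the event struct is illustrative, not minikube's own type):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// event models the JSON lines emitted by `minikube start --output=json`,
// restricted to the fields that appear in the log above.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

// decodeEvent parses one line of the event stream.
func decodeEvent(line string) (event, error) {
	var e event
	err := json.Unmarshal([]byte(line), &e)
	return e, err
}

func main() {
	// Abbreviated copy of the error event from the log above.
	line := `{"specversion":"1.0","id":"bcc3eeb0-a818-4838-b42a-5c163ac9207c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check."}}`
	e, err := decodeEvent(line)
	if err != nil {
		panic(err)
	}
	fmt.Println(e.Type, e.Data["exitcode"], e.Data["name"])
}
```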
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-021100 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-021100 --output=json --layout=cluster: exit status 7 (777.3603ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-021100","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-021100","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1028 12:07:38.606859    7216 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-021100" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-021100 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-021100 --output=json --layout=cluster: exit status 7 (773.1795ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-021100","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-021100","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1028 12:07:39.378774    3252 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-021100" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	E1028 12:07:39.412254    3252 status.go:258] unable to read event log: stat: CreateFile C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\insufficient-storage-021100\events.json: The system cannot find the file specified.

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-021100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p insufficient-storage-021100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p insufficient-storage-021100: (3.0477335s)
--- PASS: TestInsufficientStorage (42.07s)
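The --layout=cluster payloads above report StatusCode 507 (InsufficientStorage, mirroring the HTTP status code) at both the cluster and node level. A minimal decoding sketch, with structs modeling only the fields present in the log:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// component, node, and clusterStatus model the `status --output=json
// --layout=cluster` payload shown in the log (illustrative subset).
type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type node struct {
	Name       string               `json:"Name"`
	StatusCode int                  `json:"StatusCode"`
	StatusName string               `json:"StatusName"`
	Components map[string]component `json:"Components"`
}

type clusterStatus struct {
	Name         string               `json:"Name"`
	StatusCode   int                  `json:"StatusCode"`
	StatusName   string               `json:"StatusName"`
	StatusDetail string               `json:"StatusDetail"`
	Components   map[string]component `json:"Components"`
	Nodes        []node               `json:"Nodes"`
}

// decodeClusterStatus parses one cluster-layout status payload.
func decodeClusterStatus(raw string) (clusterStatus, error) {
	var st clusterStatus
	err := json.Unmarshal([]byte(raw), &st)
	return st, err
}

func main() {
	// Abbreviated copy of the second status payload from the log.
	raw := `{"Name":"insufficient-storage-021100","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Nodes":[{"Name":"insufficient-storage-021100","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"}}}]}`
	st, err := decodeClusterStatus(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(st.StatusCode, st.Nodes[0].Components["apiserver"].StatusName)
}
```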

                                                
                                    
TestRunningBinaryUpgrade (194.64s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.26.0.2467406047.exe start -p running-upgrade-324600 --memory=2200 --vm-driver=docker
version_upgrade_test.go:120: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.26.0.2467406047.exe start -p running-upgrade-324600 --memory=2200 --vm-driver=docker: (1m50.3436258s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-324600 --memory=2200 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-324600 --memory=2200 --alsologtostderr -v=1 --driver=docker: (1m18.3376629s)
helpers_test.go:175: Cleaning up "running-upgrade-324600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-324600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-324600: (5.3792989s)
--- PASS: TestRunningBinaryUpgrade (194.64s)

                                                
                                    
TestKubernetesUpgrade (239.89s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-159000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker
E1028 12:12:17.744987   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-928900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
version_upgrade_test.go:222: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-159000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker: (2m12.4618392s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-159000
version_upgrade_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-159000: (2.6475106s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-159000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-159000 status --format={{.Host}}: exit status 7 (350.3001ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-159000 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-159000 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker: (53.08991s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-159000 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-159000 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-159000 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker: exit status 106 (540.6054ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-159000] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5073 Build 19045.5073
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19875
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-159000
	    minikube start -p kubernetes-upgrade-159000 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1590002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.2, by running:
	    
	    minikube start -p kubernetes-upgrade-159000 --kubernetes-version=v1.31.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-159000 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-159000 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker: (42.7216838s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-159000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-159000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-159000: (7.8535238s)
--- PASS: TestKubernetesUpgrade (239.89s)
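The K8S_DOWNGRADE_UNSUPPORTED exit (status 106) comes from a version comparison: the requested v1.20.0 is older than the cluster's existing v1.31.2. A hypothetical sketch of that check (minikube itself uses a semver library; parseVersion and isDowngrade here are illustrative):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseVersion splits "v1.31.2" into numeric components.
func parseVersion(v string) []int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	nums := make([]int, len(parts))
	for i, p := range parts {
		n, _ := strconv.Atoi(p)
		nums[i] = n
	}
	return nums
}

// isDowngrade reports whether moving from current to requested would
// lower the Kubernetes version -- the case the test expects to be
// rejected with K8S_DOWNGRADE_UNSUPPORTED.
func isDowngrade(current, requested string) bool {
	c, r := parseVersion(current), parseVersion(requested)
	for i := 0; i < len(c) && i < len(r); i++ {
		if r[i] < c[i] {
			return true
		}
		if r[i] > c[i] {
			return false
		}
	}
	return false
}

func main() {
	fmt.Println(isDowngrade("v1.31.2", "v1.20.0")) // true: rejected, as in the log
	fmt.Println(isDowngrade("v1.20.0", "v1.31.2")) // false: upgrade is allowed
}
```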

                                                
                                    
TestMissingContainerUpgrade (333.7s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.26.0.1703621897.exe start -p missing-upgrade-512000 --memory=2200 --driver=docker
version_upgrade_test.go:309: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.26.0.1703621897.exe start -p missing-upgrade-512000 --memory=2200 --driver=docker: (2m43.7996774s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-512000
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-512000: (21.6077075s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-512000
version_upgrade_test.go:329: (dbg) Run:  out/minikube-windows-amd64.exe start -p missing-upgrade-512000 --memory=2200 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-windows-amd64.exe start -p missing-upgrade-512000 --memory=2200 --alsologtostderr -v=1 --driver=docker: (2m22.7426269s)
helpers_test.go:175: Cleaning up "missing-upgrade-512000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p missing-upgrade-512000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p missing-upgrade-512000: (4.6487913s)
--- PASS: TestMissingContainerUpgrade (333.70s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.32s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-592300 --no-kubernetes --kubernetes-version=1.20 --driver=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-592300 --no-kubernetes --kubernetes-version=1.20 --driver=docker: exit status 14 (319.7383ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-592300] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5073 Build 19045.5073
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19875
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.32s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (93.64s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-592300 --driver=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-592300 --driver=docker: (1m32.2834618s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-592300 status -o json
no_kubernetes_test.go:200: (dbg) Done: out/minikube-windows-amd64.exe -p NoKubernetes-592300 status -o json: (1.3574109s)
--- PASS: TestNoKubernetes/serial/StartWithK8s (93.64s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (30.33s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-592300 --no-kubernetes --driver=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-592300 --no-kubernetes --driver=docker: (23.5940561s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-592300 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p NoKubernetes-592300 status -o json: exit status 2 (935.195ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-592300","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-windows-amd64.exe delete -p NoKubernetes-592300
no_kubernetes_test.go:124: (dbg) Done: out/minikube-windows-amd64.exe delete -p NoKubernetes-592300: (5.800215s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (30.33s)
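The exit-status-2 result above pairs with a JSON payload showing the host Running but kubelet and apiserver Stopped, the expected shape after restarting a running profile with --no-kubernetes. A minimal sketch decoding that exact payload:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// profileStatus matches the `status -o json` line in the log above.
type profileStatus struct {
	Name       string `json:"Name"`
	Host       string `json:"Host"`
	Kubelet    string `json:"Kubelet"`
	APIServer  string `json:"APIServer"`
	Kubeconfig string `json:"Kubeconfig"`
	Worker     bool   `json:"Worker"`
}

// decodeProfileStatus parses one status payload.
func decodeProfileStatus(raw string) (profileStatus, error) {
	var st profileStatus
	err := json.Unmarshal([]byte(raw), &st)
	return st, err
}

func main() {
	// The exact payload from the log: host up, Kubernetes components down.
	raw := `{"Name":"NoKubernetes-592300","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
	st, err := decodeProfileStatus(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(st.Host, st.Kubelet)
}
```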

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.85s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.85s)

TestStoppedBinaryUpgrade/Upgrade (316.14s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.26.0.189798215.exe start -p stopped-upgrade-558300 --memory=2200 --vm-driver=docker
version_upgrade_test.go:183: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.26.0.189798215.exe start -p stopped-upgrade-558300 --memory=2200 --vm-driver=docker: (3m49.0684671s)
version_upgrade_test.go:192: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.26.0.189798215.exe -p stopped-upgrade-558300 stop
version_upgrade_test.go:192: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.26.0.189798215.exe -p stopped-upgrade-558300 stop: (21.5643943s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-558300 --memory=2200 --alsologtostderr -v=1 --driver=docker
E1028 12:13:50.220132   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-740500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-558300 --memory=2200 --alsologtostderr -v=1 --driver=docker: (1m5.503927s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (316.14s)

TestNoKubernetes/serial/Start (28.18s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-592300 --no-kubernetes --driver=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-592300 --no-kubernetes --driver=docker: (28.1753722s)
--- PASS: TestNoKubernetes/serial/Start (28.18s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.97s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p NoKubernetes-592300 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p NoKubernetes-592300 "sudo systemctl is-active --quiet service kubelet": exit status 1 (974.0085ms)

** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.97s)

TestNoKubernetes/serial/ProfileList (5.39s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-windows-amd64.exe profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-windows-amd64.exe profile list: (2.679092s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe profile list --output=json: (2.712733s)
--- PASS: TestNoKubernetes/serial/ProfileList (5.39s)

TestNoKubernetes/serial/Stop (2.37s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-windows-amd64.exe stop -p NoKubernetes-592300
no_kubernetes_test.go:158: (dbg) Done: out/minikube-windows-amd64.exe stop -p NoKubernetes-592300: (2.3716416s)
--- PASS: TestNoKubernetes/serial/Stop (2.37s)

TestNoKubernetes/serial/StartNoArgs (14.41s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-592300 --driver=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-592300 --driver=docker: (14.4098964s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (14.41s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.89s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p NoKubernetes-592300 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p NoKubernetes-592300 "sudo systemctl is-active --quiet service kubelet": exit status 1 (890.7378ms)

** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.89s)

TestStoppedBinaryUpgrade/MinikubeLogs (4.25s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-558300
version_upgrade_test.go:206: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-558300: (4.2535334s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (4.25s)

TestPause/serial/Start (108.02s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-219100 --memory=2048 --install-addons=false --wait=all --driver=docker
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-219100 --memory=2048 --install-addons=false --wait=all --driver=docker: (1m48.0164855s)
--- PASS: TestPause/serial/Start (108.02s)

TestNetworkPlugins/group/auto/Start (98.59s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p auto-455700 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p auto-455700 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker: (1m38.5849586s)
--- PASS: TestNetworkPlugins/group/auto/Start (98.59s)

TestNetworkPlugins/group/kindnet/Start (125.77s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p kindnet-455700 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p kindnet-455700 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker: (2m5.7742575s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (125.77s)

TestPause/serial/SecondStartNoReconfiguration (45.57s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-219100 --alsologtostderr -v=1 --driver=docker
E1028 12:17:00.838482   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-928900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:17:17.756151   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-928900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
pause_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-219100 --alsologtostderr -v=1 --driver=docker: (45.5563074s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (45.57s)

TestPause/serial/Pause (1.44s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-219100 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-219100 --alsologtostderr -v=5: (1.4355147s)
--- PASS: TestPause/serial/Pause (1.44s)

TestPause/serial/VerifyStatus (0.9s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p pause-219100 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p pause-219100 --output=json --layout=cluster: exit status 2 (899.2769ms)

-- stdout --
	{"Name":"pause-219100","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-219100","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.90s)

TestPause/serial/Unpause (1.43s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p pause-219100 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe unpause -p pause-219100 --alsologtostderr -v=5: (1.4338882s)
--- PASS: TestPause/serial/Unpause (1.43s)

TestPause/serial/PauseAgain (1.64s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-219100 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-219100 --alsologtostderr -v=5: (1.6378399s)
--- PASS: TestPause/serial/PauseAgain (1.64s)

TestPause/serial/DeletePaused (5.52s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p pause-219100 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p pause-219100 --alsologtostderr -v=5: (5.5228819s)
--- PASS: TestPause/serial/DeletePaused (5.52s)

TestPause/serial/VerifyDeletedResources (4.5s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (4.1805124s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-219100
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-219100: exit status 1 (83.8574ms)

-- stdout --
	[]
-- /stdout --
** stderr **
	Error response from daemon: get pause-219100: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (4.50s)

TestNetworkPlugins/group/calico/Start (176.45s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p calico-455700 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p calico-455700 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker: (2m56.4508969s)
--- PASS: TestNetworkPlugins/group/calico/Start (176.45s)

TestNetworkPlugins/group/custom-flannel/Start (111.66s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-flannel-455700 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-flannel-455700 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=docker: (1m51.6611s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (111.66s)

TestNetworkPlugins/group/auto/KubeletFlags (0.84s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p auto-455700 "pgrep -a kubelet"
I1028 12:17:55.882823   11176 config.go:182] Loaded profile config "auto-455700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.84s)

TestNetworkPlugins/group/auto/NetCatPod (30.13s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-455700 replace --force -f testdata\netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context auto-455700 replace --force -f testdata\netcat-deployment.yaml: (1.0636615s)
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-s45nb" [834565e8-47da-473e-a6b3-264e9fd34c78] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-s45nb" [834565e8-47da-473e-a6b3-264e9fd34c78] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 29.0091945s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (30.13s)

TestNetworkPlugins/group/auto/DNS (0.4s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-455700 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.40s)

TestNetworkPlugins/group/auto/Localhost (0.32s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-455700 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.32s)

TestNetworkPlugins/group/auto/HairPin (0.33s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-455700 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.33s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-l4rpm" [7c3cbc3b-e375-4d31-83b2-6112813bedb8] Running
E1028 12:18:50.220527   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-740500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.0085472s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.91s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kindnet-455700 "pgrep -a kubelet"
I1028 12:18:52.904215   11176 config.go:182] Loaded profile config "kindnet-455700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.91s)

TestNetworkPlugins/group/kindnet/NetCatPod (19.63s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-455700 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-xj5xr" [51b91e6a-50f3-4e0a-a9b4-830aef8af4cc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-xj5xr" [51b91e6a-50f3-4e0a-a9b4-830aef8af4cc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 19.0101983s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (19.63s)

TestNetworkPlugins/group/kindnet/DNS (0.51s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-455700 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.51s)

TestNetworkPlugins/group/kindnet/Localhost (0.55s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-455700 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.55s)

TestNetworkPlugins/group/kindnet/HairPin (0.45s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-455700 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.45s)

TestNetworkPlugins/group/false/Start (120.95s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p false-455700 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p false-455700 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker: (2m0.9532794s)
--- PASS: TestNetworkPlugins/group/false/Start (120.95s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (1.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p custom-flannel-455700 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p custom-flannel-455700 "pgrep -a kubelet": (1.2465195s)
I1028 12:19:43.674487   11176 config.go:182] Loaded profile config "custom-flannel-455700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (1.25s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (18.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-455700 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-zlnlv" [3f9bf46b-e214-495e-a3f5-0e49ac998b06] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-zlnlv" [3f9bf46b-e214-495e-a3f5-0e49ac998b06] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 17.0089858s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (18.11s)

TestNetworkPlugins/group/custom-flannel/DNS (0.52s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-455700 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.52s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.35s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-455700 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.35s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.41s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-455700 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.41s)

TestNetworkPlugins/group/enable-default-cni/Start (80.81s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p enable-default-cni-455700 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker
E1028 12:20:13.309630   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-740500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p enable-default-cni-455700 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker: (1m20.8122445s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (80.81s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-mkl8r" [67161b30-fdbc-4f4d-990a-3f7659c06f32] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.0092854s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.8s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p calico-455700 "pgrep -a kubelet"
I1028 12:20:51.977588   11176 config.go:182] Loaded profile config "calico-455700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.80s)

TestNetworkPlugins/group/calico/NetCatPod (22.64s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-455700 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-l5j9g" [52f4dedf-9739-49c6-8f7f-d695e8d0f34a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-l5j9g" [52f4dedf-9739-49c6-8f7f-d695e8d0f34a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 22.0116526s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (22.64s)

TestNetworkPlugins/group/flannel/Start (110s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p flannel-455700 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p flannel-455700 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker: (1m49.9959713s)
--- PASS: TestNetworkPlugins/group/flannel/Start (110.00s)

TestNetworkPlugins/group/calico/DNS (0.49s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-455700 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.49s)

TestNetworkPlugins/group/calico/Localhost (0.34s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-455700 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.34s)

TestNetworkPlugins/group/calico/HairPin (0.35s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-455700 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.35s)

TestNetworkPlugins/group/false/KubeletFlags (0.93s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p false-455700 "pgrep -a kubelet"
I1028 12:21:17.388586   11176 config.go:182] Loaded profile config "false-455700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.93s)

TestNetworkPlugins/group/false/NetCatPod (21.47s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-455700 replace --force -f testdata\netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context false-455700 replace --force -f testdata\netcat-deployment.yaml: (1.3466984s)
I1028 12:21:18.776246   11176 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-kfgnq" [89800ccd-41cf-40aa-a389-a32f795d51cf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-kfgnq" [89800ccd-41cf-40aa-a389-a32f795d51cf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 20.0087124s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (21.47s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (1.06s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p enable-default-cni-455700 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p enable-default-cni-455700 "pgrep -a kubelet": (1.0536825s)
I1028 12:21:29.522398   11176 config.go:182] Loaded profile config "enable-default-cni-455700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (1.06s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (18.67s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-455700 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-9c26t" [8bac0e33-1d09-4334-9bcd-67e073f83159] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-9c26t" [8bac0e33-1d09-4334-9bcd-67e073f83159] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 18.0291615s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (18.67s)

TestNetworkPlugins/group/false/DNS (0.50s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-455700 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.50s)

TestNetworkPlugins/group/false/Localhost (0.33s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-455700 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.33s)

TestNetworkPlugins/group/false/HairPin (0.35s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-455700 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.35s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.38s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-455700 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.38s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-455700 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.31s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.39s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-455700 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.39s)

TestNetworkPlugins/group/bridge/Start (116.61s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p bridge-455700 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker
E1028 12:22:17.766662   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-928900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p bridge-455700 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker: (1m56.6142958s)
--- PASS: TestNetworkPlugins/group/bridge/Start (116.61s)

TestNetworkPlugins/group/kubenet/Start (129.59s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubenet-455700 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p kubenet-455700 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker: (2m9.5938819s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (129.59s)

TestStartStop/group/old-k8s-version/serial/FirstStart (226.91s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-013200 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-013200 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.20.0: (3m46.9042653s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (226.91s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-9sknq" [37c3b195-75a5-4d58-9165-abb8096e74f9] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.0088375s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.77s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p flannel-455700 "pgrep -a kubelet"
I1028 12:22:51.048654   11176 config.go:182] Loaded profile config "flannel-455700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.77s)

TestNetworkPlugins/group/flannel/NetCatPod (29.63s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-455700 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-rgptb" [4503604b-34d4-4e0c-8061-51a209229079] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1028 12:22:56.974490   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:22:56.981509   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:22:56.993520   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:22:57.015574   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:22:57.057920   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:22:57.140035   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:22:57.301935   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:22:57.623931   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:22:58.267021   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:22:59.549529   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:23:02.111923   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:23:07.234848   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-rgptb" [4503604b-34d4-4e0c-8061-51a209229079] Running
E1028 12:23:17.477591   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 29.0093666s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (29.63s)

TestNetworkPlugins/group/flannel/DNS (0.38s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-455700 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.38s)

TestNetworkPlugins/group/flannel/Localhost (0.34s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-455700 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.34s)

TestNetworkPlugins/group/flannel/HairPin (0.34s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-455700 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.34s)

TestNetworkPlugins/group/bridge/KubeletFlags (2.42s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p bridge-455700 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p bridge-455700 "pgrep -a kubelet": (1.010304s)
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (2.42s)

TestNetworkPlugins/group/bridge/NetCatPod (21.78s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-455700 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-krzcx" [305dceaa-4636-40b4-8e55-fd1ea12aa15e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-krzcx" [305dceaa-4636-40b4-8e55-fd1ea12aa15e] Running
E1028 12:24:26.986517   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 21.0072032s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (21.78s)

TestStartStop/group/no-preload/serial/FirstStart (135.94s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-889700 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.31.2
E1028 12:24:18.924701   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-889700 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.31.2: (2m15.9399398s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (135.94s)

TestNetworkPlugins/group/bridge/DNS (0.39s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-455700 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.39s)

TestNetworkPlugins/group/bridge/Localhost (0.34s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-455700 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.34s)

TestNetworkPlugins/group/bridge/HairPin (0.32s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-455700 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.32s)

TestNetworkPlugins/group/kubenet/KubeletFlags (1.03s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kubenet-455700 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p kubenet-455700 "pgrep -a kubelet": (1.0288763s)
I1028 12:24:44.262419   11176 config.go:182] Loaded profile config "kubenet-455700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (1.03s)

TestNetworkPlugins/group/kubenet/NetCatPod (22.87s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-455700 replace --force -f testdata\netcat-deployment.yaml
E1028 12:24:44.395429   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:24:44.403442   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:24:44.416414   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:24:44.439435   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:24:44.482875   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:24:44.565420   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:24:44.727740   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:24:45.050757   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-q2bl8" [0008f97f-a52f-4cf3-a903-1cdc50dd8712] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1028 12:24:45.693569   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:24:46.976437   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:24:49.539248   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:24:54.662282   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-q2bl8" [0008f97f-a52f-4cf3-a903-1cdc50dd8712] Running
E1028 12:25:04.904532   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 22.0084147s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (22.87s)

TestNetworkPlugins/group/kubenet/DNS (0.41s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-455700 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.41s)

TestNetworkPlugins/group/kubenet/Localhost (0.41s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-455700 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E1028 12:25:07.950647   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.41s)

TestNetworkPlugins/group/kubenet/HairPin (0.36s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-455700 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.36s)

TestStartStop/group/embed-certs/serial/FirstStart (120.55s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-232900 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.31.2
E1028 12:25:40.854006   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:25:45.182143   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:25:45.190135   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:25:45.203141   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:25:45.226164   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:25:45.269145   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:25:45.352166   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:25:45.515146   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:25:45.838205   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:25:46.481041   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:25:47.763734   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:25:50.325727   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-232900 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.31.2: (2m0.548968s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (120.55s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (104.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-diff-port-473100 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.31.2
E1028 12:26:05.691737   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:26:06.352324   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:26:18.762861   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:26:18.770150   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:26:18.782769   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:26:18.805124   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:26:18.848081   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:26:18.931137   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:26:19.093721   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:26:19.416537   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:26:20.058801   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:26:21.341529   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:26:23.903729   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:26:26.174837   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-diff-port-473100 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.31.2: (1m44.0891735s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (104.09s)

TestStartStop/group/old-k8s-version/serial/DeployApp (11.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-013200 create -f testdata\busybox.yaml
E1028 12:26:29.026707   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a142a95e-621e-4b94-9a70-fb2c54567400] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1028 12:26:29.877367   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:26:30.150303   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:26:30.158310   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:26:30.171310   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:26:30.194303   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:26:30.236179   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:26:30.318189   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:26:30.481575   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:26:30.805661   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:26:31.448825   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:344: "busybox" [a142a95e-621e-4b94-9a70-fb2c54567400] Running
E1028 12:26:35.294114   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.0120489s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-013200 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.11s)

TestStartStop/group/no-preload/serial/DeployApp (11.02s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-889700 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c32db63b-e740-4436-9d20-517b8542b6f9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1028 12:26:32.731718   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:344: "busybox" [c32db63b-e740-4436-9d20-517b8542b6f9] Running
E1028 12:26:39.269658   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.0071956s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-889700 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.02s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.57s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-013200 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1028 12:26:40.417283   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-013200 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.2408065s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-013200 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.57s)

TestStartStop/group/old-k8s-version/serial/Stop (12.41s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p old-k8s-version-013200 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p old-k8s-version-013200 --alsologtostderr -v=3: (12.4141757s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.41s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.78s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-889700 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-889700 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.4481665s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-889700 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.78s)

TestStartStop/group/no-preload/serial/Stop (12.36s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p no-preload-889700 --alsologtostderr -v=3
E1028 12:26:50.660371   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p no-preload-889700 --alsologtostderr -v=3: (12.3579392s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.36s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.78s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-013200 -n old-k8s-version-013200
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-013200 -n old-k8s-version-013200: exit status 7 (326.5596ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-013200 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.78s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.93s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-889700 -n no-preload-889700
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-889700 -n no-preload-889700: exit status 7 (395.172ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-889700 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.93s)

TestStartStop/group/no-preload/serial/SecondStart (295.71s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-889700 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.31.2
E1028 12:26:59.752959   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:27:07.138764   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:27:11.144232   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:27:17.779203   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-928900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:27:28.277651   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-889700 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.31.2: (4m54.7852422s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-889700 -n no-preload-889700
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (295.71s)

TestStartStop/group/embed-certs/serial/DeployApp (14.85s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-232900 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7c638b31-5b2f-4273-9241-6743e289c884] Pending
helpers_test.go:344: "busybox" [7c638b31-5b2f-4273-9241-6743e289c884] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7c638b31-5b2f-4273-9241-6743e289c884] Running
E1028 12:27:40.717413   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 14.011145s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-232900 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (14.85s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.43s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-232900 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1028 12:27:44.281105   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:27:44.289106   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:27:44.302097   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:27:44.325114   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:27:44.367106   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:27:44.450096   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:27:44.612895   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:27:44.936057   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:27:45.577914   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-232900 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.0980318s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-232900 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.43s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (12.89s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-473100 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5ffaf8aa-c990-44b5-a4df-565d5cc62e18] Pending
helpers_test.go:344: "busybox" [5ffaf8aa-c990-44b5-a4df-565d5cc62e18] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1028 12:27:49.424278   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:27:52.108855   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:344: "busybox" [5ffaf8aa-c990-44b5-a4df-565d5cc62e18] Running
E1028 12:27:54.547547   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:27:56.986864   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 12.0138167s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-473100 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (12.89s)

TestStartStop/group/embed-certs/serial/Stop (13.01s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p embed-certs-232900 --alsologtostderr -v=3
E1028 12:27:46.861168   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p embed-certs-232900 --alsologtostderr -v=3: (13.0118189s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.01s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.53s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-diff-port-473100 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-diff-port-473100 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.9505882s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-473100 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.53s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (1.13s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-232900 -n embed-certs-232900
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-232900 -n embed-certs-232900: exit status 7 (479.6816ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-232900 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (1.13s)

TestStartStop/group/embed-certs/serial/SecondStart (294.55s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-232900 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-232900 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.31.2: (4m53.5854077s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-232900 -n embed-certs-232900
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (294.55s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (13.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p default-k8s-diff-port-473100 --alsologtostderr -v=3
E1028 12:28:04.790233   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p default-k8s-diff-port-473100 --alsologtostderr -v=3: (13.0732124s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.07s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (1.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-473100 -n default-k8s-diff-port-473100
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-473100 -n default-k8s-diff-port-473100: exit status 7 (508.7316ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-diff-port-473100 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (1.27s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (292.79s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-diff-port-473100 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.31.2
E1028 12:28:24.702926   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:28:25.273601   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:28:29.064563   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:28:46.012953   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:28:50.243804   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-740500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:29:02.643891   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:29:06.237795   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:29:10.916113   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:29:10.923128   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:29:10.936112   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:29:10.959113   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:29:11.001355   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:29:11.087012   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:29:11.249308   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:29:11.571872   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:29:12.214377   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:29:13.496673   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:29:13.726425   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:29:14.034671   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:29:16.058961   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:29:21.180238   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:29:31.422825   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:29:44.407093   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:29:45.106552   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:29:45.114561   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:29:45.127560   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:29:45.150580   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:29:45.192969   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:29:45.275234   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:29:45.437272   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:29:45.760070   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:29:46.402718   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:29:47.684847   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:29:50.247733   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:29:51.906403   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:29:55.370318   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:30:05.613373   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:30:12.125993   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:30:26.097199   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:30:28.164199   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:30:32.870813   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:30:45.194984   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:31:07.060823   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:31:12.914246   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:31:18.780620   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:31:30.162387   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:31:46.492937   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-diff-port-473100 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.31.2: (4m51.8075032s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-473100 -n default-k8s-diff-port-473100
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (292.79s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-znl4p" [f8b689f2-a530-4772-adf5-bf02de355fa1] Running
E1028 12:31:54.797332   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:31:57.884266   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0142963s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.02s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.35s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-znl4p" [f8b689f2-a530-4772-adf5-bf02de355fa1] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0100167s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-889700 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.35s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.62s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-889700 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.62s)

TestStartStop/group/no-preload/serial/Pause (7.02s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p no-preload-889700 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p no-preload-889700 --alsologtostderr -v=1: (1.5414372s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-889700 -n no-preload-889700
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-889700 -n no-preload-889700: exit status 2 (883.559ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-889700 -n no-preload-889700
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-889700 -n no-preload-889700: exit status 2 (893.1597ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p no-preload-889700 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p no-preload-889700 --alsologtostderr -v=1: (1.3541868s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-889700 -n no-preload-889700
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-889700 -n no-preload-889700: (1.4218242s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-889700 -n no-preload-889700
--- PASS: TestStartStop/group/no-preload/serial/Pause (7.02s)

TestStartStop/group/newest-cni/serial/FirstStart (72.58s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-177500 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.31.2
E1028 12:32:28.986854   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:32:44.292505   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-177500 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.31.2: (1m12.5761844s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (72.58s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-hf9z5" [1e175150-03ed-4398-9993-bc452b1c5f1e] Running
E1028 12:32:56.998775   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0117337s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.36s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-hf9z5" [1e175150-03ed-4398-9993-bc452b1c5f1e] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0114902s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-232900 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.36s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.7s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p embed-certs-232900 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.70s)

TestStartStop/group/embed-certs/serial/Pause (7.37s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p embed-certs-232900 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p embed-certs-232900 --alsologtostderr -v=1: (1.665013s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-232900 -n embed-certs-232900
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-232900 -n embed-certs-232900: exit status 2 (962.2514ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-232900 -n embed-certs-232900
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-232900 -n embed-certs-232900: exit status 2 (895.9439ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p embed-certs-232900 --alsologtostderr -v=1
E1028 12:33:12.012920   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p embed-certs-232900 --alsologtostderr -v=1: (1.3618679s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-232900 -n embed-certs-232900
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-232900 -n embed-certs-232900: (1.380891s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-232900 -n embed-certs-232900
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-232900 -n embed-certs-232900: (1.1061538s)
--- PASS: TestStartStop/group/embed-certs/serial/Pause (7.37s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-wngbl" [4755bcd4-99fd-4f5b-9e29-853e82ead4d2] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00912s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.4s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-wngbl" [4755bcd4-99fd-4f5b-9e29-853e82ead4d2] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0129677s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-473100 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.40s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.67s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p default-k8s-diff-port-473100 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.67s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (7.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-473100 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-473100 --alsologtostderr -v=1: (1.4750702s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-473100 -n default-k8s-diff-port-473100
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-473100 -n default-k8s-diff-port-473100: exit status 2 (938.3573ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-473100 -n default-k8s-diff-port-473100
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-473100 -n default-k8s-diff-port-473100: exit status 2 (965.5607ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p default-k8s-diff-port-473100 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p default-k8s-diff-port-473100 --alsologtostderr -v=1: (1.4379325s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-473100 -n default-k8s-diff-port-473100
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-473100 -n default-k8s-diff-port-473100: (1.3199132s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-473100 -n default-k8s-diff-port-473100
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-473100 -n default-k8s-diff-port-473100: (1.0246319s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (7.17s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.45s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-177500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-177500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (3.446781s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.45s)

TestStartStop/group/newest-cni/serial/Stop (7.64s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p newest-cni-177500 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p newest-cni-177500 --alsologtostderr -v=3: (7.6420485s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.64s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.83s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-177500 -n newest-cni-177500
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-177500 -n newest-cni-177500: exit status 7 (353.3871ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p newest-cni-177500 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.83s)

TestStartStop/group/newest-cni/serial/SecondStart (32.53s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-177500 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-177500 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.31.2: (31.3508256s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-177500 -n newest-cni-177500
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-177500 -n newest-cni-177500: (1.1796379s)
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (32.53s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-gd2xl" [7a8e1232-0374-42cb-8b9c-efe53a947fe4] Running
E1028 12:33:46.025385   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-455700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1028 12:33:50.256338   11176 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-740500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0067857s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.48s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-gd2xl" [7a8e1232-0374-42cb-8b9c-efe53a947fe4] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0079329s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-013200 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.48s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.67s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p old-k8s-version-013200 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.67s)

TestStartStop/group/old-k8s-version/serial/Pause (7.44s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p old-k8s-version-013200 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p old-k8s-version-013200 --alsologtostderr -v=1: (1.5477708s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-013200 -n old-k8s-version-013200
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-013200 -n old-k8s-version-013200: exit status 2 (999.9659ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-013200 -n old-k8s-version-013200
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-013200 -n old-k8s-version-013200: exit status 2 (973.489ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p old-k8s-version-013200 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p old-k8s-version-013200 --alsologtostderr -v=1: (1.4408354s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-013200 -n old-k8s-version-013200
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-013200 -n old-k8s-version-013200: (1.3437089s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-013200 -n old-k8s-version-013200
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-013200 -n old-k8s-version-013200: (1.1335526s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (7.44s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.63s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-177500 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.63s)

TestStartStop/group/newest-cni/serial/Pause (8.79s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p newest-cni-177500 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p newest-cni-177500 --alsologtostderr -v=1: (1.6330862s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-177500 -n newest-cni-177500
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-177500 -n newest-cni-177500: exit status 2 (962.2589ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-177500 -n newest-cni-177500
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-177500 -n newest-cni-177500: exit status 2 (1.0769989s)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p newest-cni-177500 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p newest-cni-177500 --alsologtostderr -v=1: (1.8649075s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-177500 -n newest-cni-177500
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-177500 -n newest-cni-177500: (1.6734219s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-177500 -n newest-cni-177500
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-177500 -n newest-cni-177500: (1.5813048s)
--- PASS: TestStartStop/group/newest-cni/serial/Pause (8.79s)

Test skip (26/342)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.2/cached-images (0.00s)

TestDownloadOnly/v1.31.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.2/binaries (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Registry (23.88s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 8.1991ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-6cz4m" [1315123c-d305-41f4-a242-035c0907c8ae] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.009239s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-lvtfk" [0e1dc02d-d824-46be-a6e6-fa6ee0b2db7f] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0110044s
addons_test.go:331: (dbg) Run:  kubectl --context addons-740500 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-740500 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-740500 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (12.0916268s)
addons_test.go:346: Unable to complete rest of the test due to connectivity assumptions
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-740500 addons disable registry --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-740500 addons disable registry --alsologtostderr -v=1: (1.4421977s)
--- SKIP: TestAddons/parallel/Registry (23.88s)

TestAddons/parallel/Ingress (26.12s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-740500 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-740500 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-740500 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [69239fee-9320-4de3-a389-29343e397fe0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [69239fee-9320-4de3-a389-29343e397fe0] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.0151693s
I1028 11:10:45.779233   11176 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-740500 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:282: skipping ingress DNS test for any combination that needs port forwarding
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-740500 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-740500 addons disable ingress-dns --alsologtostderr -v=1: (2.6740825s)
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-740500 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-740500 addons disable ingress --alsologtostderr -v=1: (9.4077995s)
--- SKIP: TestAddons/parallel/Ingress (26.12s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DashboardCmd (300.01s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-928900 --alsologtostderr -v=1]
functional_test.go:916: output didn't produce a URL
functional_test.go:910: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-928900 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 16152: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.01s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd
=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:64: skipping: mount broken on windows: https://github.com/kubernetes/minikube/issues/8303
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/ServiceCmdConnect (11.62s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-928900 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-928900 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-qjnts" [9b69527f-fbcd-403e-8897-53f075651ae2] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-qjnts" [9b69527f-fbcd-403e-8897-53f075651ae2] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.0084331s
functional_test.go:1646: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (11.62s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
x
+
TestScheduledStopUnix (0s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (13.81s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-455700 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-455700

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-455700

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-455700

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-455700

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-455700

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-455700

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-455700

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-455700

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-455700

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-455700

>>> host: /etc/nsswitch.conf:
* Profile "cilium-455700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455700"

>>> host: /etc/hosts:
* Profile "cilium-455700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455700"

>>> host: /etc/resolv.conf:
* Profile "cilium-455700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455700"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-455700

>>> host: crictl pods:
* Profile "cilium-455700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455700"

>>> host: crictl containers:
* Profile "cilium-455700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455700"

>>> k8s: describe netcat deployment:
error: context "cilium-455700" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-455700" does not exist

>>> k8s: netcat logs:
error: context "cilium-455700" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-455700" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-455700" does not exist

>>> k8s: coredns logs:
error: context "cilium-455700" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-455700" does not exist

>>> k8s: api server logs:
error: context "cilium-455700" does not exist

>>> host: /etc/cni:
* Profile "cilium-455700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455700"

>>> host: ip a s:
* Profile "cilium-455700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455700"

>>> host: ip r s:
* Profile "cilium-455700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455700"

>>> host: iptables-save:
* Profile "cilium-455700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455700"

>>> host: iptables table nat:
* Profile "cilium-455700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455700"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-455700

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-455700

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-455700" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-455700" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-455700

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-455700

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-455700" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-455700" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-455700" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-455700" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-455700" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-455700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455700"

>>> host: kubelet daemon config:
* Profile "cilium-455700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455700"

>>> k8s: kubelet logs:
* Profile "cilium-455700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455700"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-455700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455700"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-455700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455700"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-455700

>>> host: docker daemon status:
* Profile "cilium-455700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455700"

>>> host: docker daemon config:
* Profile "cilium-455700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455700"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-455700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455700"

>>> host: docker system info:
* Profile "cilium-455700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455700"

>>> host: cri-docker daemon status:
* Profile "cilium-455700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455700"

>>> host: cri-docker daemon config:
* Profile "cilium-455700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455700"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-455700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455700"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-455700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455700"

>>> host: cri-dockerd version:
* Profile "cilium-455700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455700"

>>> host: containerd daemon status:
* Profile "cilium-455700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455700"

>>> host: containerd daemon config:
* Profile "cilium-455700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455700"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-455700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455700"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-455700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455700"

>>> host: containerd config dump:
* Profile "cilium-455700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455700"

>>> host: crio daemon status:
* Profile "cilium-455700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455700"

>>> host: crio daemon config:
* Profile "cilium-455700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455700"

>>> host: /etc/crio:
* Profile "cilium-455700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455700"

>>> host: crio config:
* Profile "cilium-455700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455700"

----------------------- debugLogs end: cilium-455700 [took: 13.1667207s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-455700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cilium-455700
--- SKIP: TestNetworkPlugins/group/cilium (13.81s)

TestStartStop/group/disable-driver-mounts (0.88s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-448700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p disable-driver-mounts-448700
--- SKIP: TestStartStop/group/disable-driver-mounts (0.88s)